An Infinitely Large Napkin

http://web.evanchen.cc/napkin.html
Evan Chen
Version: html-alpha

When introduced to a new idea, always ask why you should care.
Do not expect an answer right away, but demand one eventually.

— Ravi Vakil [?]

_______________________________________________________________________________________________

If you like this book and want to support me, please consider buying me a coffee!

http://ko-fi.com/evanchen/

For Brian and Lisa, who finally got me to write it.
© 2021 Evan Chen. Some rights reserved; check the GitHub.
This is (still!) an incomplete draft. Please send corrections, comments, pictures of kittens, etc. to evan@evanchen.cc, or pull-request at https://github.com/vEnhance/napkin.
Last updated February 26, 2021.

Preface

The origin of the name “Napkin” comes from the following quote of mine.

I’ll be eating a quick lunch with some friends of mine who are still in high school. They’ll ask me what I’ve been up to the last few weeks, and I’ll tell them that I’ve been learning category theory. They’ll ask me what category theory is about. I’ll tell them it’s about abstracting things by looking at just the structure-preserving morphisms between them, rather than the objects themselves. I’ll try to give them the standard example Grp, but then I’ll realize that they don’t know what a homomorphism is. So then I’ll start trying to explain what a homomorphism is, but then I’ll remember that they haven’t learned what a group is. So then I’ll start trying to explain what a group is, but by the time I finish writing the group axioms on my napkin, they’ve already forgotten why I was talking about groups in the first place. And then it’s 1PM, people need to go places, and I can’t help but think:
“Man, if I had forty hours instead of forty minutes, I bet I could actually have explained this all”.

This book was initially my attempt at those forty hours, but has grown considerably since then.

About this book

The Infinitely Large Napkin is a light but mostly self-contained introduction to a large amount of higher math.

I should say at once that this book is not intended as a replacement for dedicated books or courses; the amount of depth is not comparable. On the flip side, the benefit of this “light” approach is that it becomes accessible to a larger audience, since the goal is merely to give the reader a feeling for any particular topic rather than to emulate a full semester of lectures.

I initially wrote this book with talented high-school students in mind, particularly those with math-olympiad type backgrounds. Some remnants of that cultural bias can still be felt throughout the book, particularly in assorted challenge problems which are taken from mathematical competitions. However, in general I think this would be a good reference for anyone with some amount of mathematical maturity and curiosity. Examples include, but are certainly not limited to: math undergraduate majors, physics/CS majors, math PhD students who want to hear a little bit about fields other than their own, advanced high schoolers who like math but not math contests, and unusually intelligent kittens fluent in English.

Source code

The project is hosted on GitHub at https://github.com/vEnhance/napkin. Pull requests are quite welcome! I am also happy to receive suggestions and corrections by email.

Philosophy behind the Napkin approach

As far as I can tell, higher math for high-school students comes in two flavors: either someone shows off a flashy result with none of the underlying theory explained, or you commit to a full-blown university course or dedicated textbook.

Presumably you already know how unsatisfying the first approach is. So the second approach seems to be the default, but I really think there should be some sort of middle ground here.

Unlike university, it is not the purpose of this book to train you to solve exercises or write proofs, or prepare you for research in the field. Instead I just want to show you some interesting math. The things that are presented should be memorable and worth caring about. For that reason, proofs that would be included for completeness in any ordinary textbook are often omitted here, unless there is some idea in the proof which I think is worth seeing. In particular, I place a strong emphasis on explaining why a theorem should be true rather than writing down its proof. This is a recurrent theme of this book:

Natural explanations supersede proofs.

My hope is that after reading any particular chapter in Napkin, one gets a sense of what the key objects and theorems in that topic are, and some understanding of why they are true and worth caring about.

Understanding “why” something is true can have many forms. This is sometimes accomplished with a complete rigorous proof; in other cases, it is given by the idea of the proof; in still other cases, it is just a few key examples with extensive cheerleading.

Obviously this is nowhere near enough if you want to e.g. do research in a field; but if you are just a curious outsider, I hope that it’s more satisfying than the elevator pitch or Wikipedia articles. Even if you do want to learn a topic with serious depth, I hope that it can be a good zoomed-out overview before you really dive in, because in many senses the choice of material is “what I wish someone had told me before I started”.

More pedagogical comments and references

The preface would become too long if I talked about some of my pedagogical decisions chapter by chapter, so Appendix A contains those comments instead.

In particular, I often name specific references, and the end of that appendix has more references. So this is a good place to look if you want further reading.

Historical and personal notes

I began writing this book in December of 2014, after having finished my first semester of undergraduate at Harvard. It became my main focus for about 18 months after that, as I became immersed in higher math. I essentially took only math classes (gleefully ignoring all my other graduation requirements) and merged as much of it as I could (as well as lots of other math I learned on my own time) into the Napkin.

Towards August of 2016, though, I finally lost steam. The first public drafts went online then, and I decided to step back. Having burnt out slightly, I then took a break from higher math, and spent the remaining two undergraduate years working extensively as a coach for the American math olympiad team, and trying to spend as much time with my friends as I could before they graduated and went their own ways.

During those two years, readers sent me many kind words of gratitude, many reports of errors, and many suggestions for additions. So in November of 2018, some weeks into my first semester as a math PhD student, I decided I should finish what I had started. Some months later, here is what I have.

Acknowledgements


I am indebted to countless people for this work. Here is a partial (surely incomplete) list.

Finally, a huge thanks to the math olympiad community, from which the Napkin (and I) have our roots. All the enthusiasm, encouragement, and thank-you notes I have received over the years led me to begin writing this in the first place. I otherwise would never have had the arrogance to dream a project like this was at all possible. And of course I would be nowhere near where I am today were it not for the life-changing journey I took in chasing my dreams to the IMO. Forever TWN2!

Advice for the reader

1  Prerequisites

As explained in the preface, the main prerequisite is some amount of mathematical maturity. This means I expect the reader to know how to read and write a proof, follow logical arguments, and so on.

I also assume the reader is familiar with basic terminology about sets and functions (e.g. “what is a bijection?”). If not, one should consult Appendix E.

2  Deciding what to read

There is no need to read this book in linear order: it covers all sorts of areas in mathematics, and there are many paths you can take. In Chapter 0, I give a short overview for each part explaining what you might expect to see in that part.

For now, here is a brief chart showing how the chapters depend on each other; again see Chapter 0 for details. Dependencies are indicated by arrows; dotted lines are optional dependencies. I suggest that you simply pick a chapter you find interesting, and then find the shortest path. With that in mind, I hope the length of the entire PDF is not intimidating.

[Figure: chapter dependency chart.]

3  Questions, exercises, and problems

In this book, there are three hierarchies: inline Questions, which are meant to be straightforward checks of understanding; inline Exercises, which take a bit more work; and the harder Problems collected at the end of each chapter, some of which are quite challenging.

Several hints and solutions can be found in Appendices B and C.


4  Paper

At the risk of being blunt,

Read this book with pencil and paper.

Here’s why:


You are not God. You cannot keep everything in your head. If you’ve printed out a hard copy, then write in the margins. If you’re trying to save paper, grab a notebook or something along for the ride. Somehow, some way, make sure you can write. Thanks.

5  On the importance of examples

I am pathologically obsessed with examples. In this book, I place all examples in large boxes to draw emphasis to them, which leads to some pages of the book simply consisting of sequences of boxes one after another. I hope the reader doesn’t mind.

I also often highlight a “prototypical example” for some sections, and reserve the color red for such a note. The philosophy is that any time the reader sees a definition or a theorem about such an object, they should test it against the prototypical example. If the example is a good prototype, it should be immediately clear why this definition is intuitive, or why the theorem should be true, or why the theorem is interesting, et cetera.

Let me tell you a secret. Whenever I wrote a definition or a theorem in this book, I would have to recall the exact statement from my (quite poor) memory. So instead I would often consider the prototypical example, and only after that remember what the definition or the theorem was. Incidentally, this is also how I learned all the definitions in the first place. I hope you’ll find it useful as well.

6  Conventions and notations

This section describes some of the less familiar notations and definitions and settles once and for all some annoying issues (“is zero a natural number?”). Most of these are “remarks for experts”: if something doesn’t make sense, you probably don’t have to worry about it for now.

A full glossary of notation used can be found in the appendix.

6.i  Natural numbers are positive

The set ℕ is the set of positive integers, not including 0. In the set theory chapters, we use ω = {0, 1, …} instead, for consistency with the rest of the book.

6.ii  Sets and equivalence relations

This is brief, intended as a reminder for experts. Consult Appendix E for full details.

An equivalence relation on a set X is a relation ∼ which is symmetric, reflexive, and transitive. An equivalence relation ∼ partitions X into several equivalence classes; the set of these is denoted X/∼. An element of such an equivalence class is a representative of that equivalence class.

I always use ≅ for an “isomorphism”-style relation (formally: a relation which is an isomorphism in a reasonable category). The only time ≃ is used in the Napkin is for homotopic paths.

I generally use ⊆ and ⊊ since these are non-ambiguous, unlike ⊂. I only use ⊂ on rare occasions in which equality obviously does not hold yet pointing it out would be distracting. For example, I write ℚ ⊂ ℝ since “ℚ ⊊ ℝ” is distracting.

I prefer S ∖ T to S − T.

The power set of S (i.e., the set of subsets of S), is denoted either by 2S or 𝒫(S).

6.iii  Functions

This is brief, intended as a reminder for experts. Consult Appendix E for full details.

Let f : X → Y be a function. For a subset T ⊆ Y, we write f^pre(T) for the pre-image {x ∈ X ∣ f(x) ∈ T} (avoiding the more common f⁻¹(T), which we reserve for inverse functions); for a subset S ⊆ X, we write f^img(S) for the image {f(x) ∣ x ∈ S}.

6.iv  Cycle notation for permutations

Additionally, a permutation on a finite set may be denoted in cycle notation, as described in say https://en.wikipedia.org/wiki/Permutation#Cycle_notation. For example the notation (1 2 3 4)(5 6 7) refers to the permutation with 1 ↦ 2, 2 ↦ 3, 3 ↦ 4, 4 ↦ 1, 5 ↦ 6, 6 ↦ 7, 7 ↦ 5. Usage of this notation will usually be obvious from context.
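If you want to play with this, here is a small Python sketch (my own, not part of the text; the function name is made up) that unpacks cycle notation into an explicit mapping:

    # Convert cycle notation into an explicit mapping on {1, ..., n}.
    # Assumes the cycles are disjoint, as in standard cycle notation.
    def permutation_from_cycles(cycles, n):
        perm = {i: i for i in range(1, n + 1)}  # start from the identity
        for cycle in cycles:
            for i, x in enumerate(cycle):
                perm[x] = cycle[(i + 1) % len(cycle)]  # each entry maps to the next
        return perm

    print(permutation_from_cycles([(1, 2, 3, 4), (5, 6, 7)], 7))
    # {1: 2, 2: 3, 3: 4, 4: 1, 5: 6, 6: 7, 7: 5}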

6.v  Rings

All rings have a multiplicative identity 1 unless otherwise specified. We allow 0 = 1 in general rings but not in integral domains.

All rings are commutative unless otherwise specified. There is an elaborate scheme for naming rings which are not commutative, used only in the chapter on cohomology rings:




                               Graded                        Not graded
  1 not required               graded pseudo-ring            pseudo-ring
  Anticommutative, 1 not req.  anticommutative pseudo-ring   N/A
  Has 1                        graded ring                   N/A
  Anticommutative with 1       anticommutative ring          N/A
  Commutative with 1           commutative graded ring       ring



On the other hand, an algebra always has 1, but it need not be commutative.

6.vi  Choice

We accept the Axiom of Choice, and use it freely.

7  Further reading

Appendix A contains a list of resources I like, and explanations of pedagogical choices that I made for each chapter. I encourage you to check it out.

In particular, this is where you should go for further reading! There are some topics that should be covered in the Napkin, but are not, due to my own ignorance or laziness. The references provided in this appendix should hopefully help partially atone for my omissions.

Contents

  Preface
  Advice for the reader
 1  Prerequisites
 2  Deciding what to read
 3  Questions, exercises, and problems
 4  Paper
 5  On the importance of examples
 6  Conventions and notations
 7  Further reading
I  Starting Out
0  Sales pitches
 0.1  The basics
 0.2  Abstract algebra
 0.3  Real and complex analysis
 0.4  Algebraic number theory
 0.5  Algebraic topology
 0.6  Algebraic geometry
 0.7  Set theory
1  Groups
 1.1  Definition and examples of groups
 1.2  Properties of groups
 1.3  Isomorphisms
 1.4  Orders of groups, and Lagrange’s theorem
 1.5  Subgroups
 1.6  Groups of small orders
 1.7  Unimportant long digression
 1.8  A few harder problems to think about
2  Metric spaces
 2.1  Definition and examples of metric spaces
 2.2  Convergence in metric spaces
 2.3  Continuous maps
 2.4  Homeomorphisms
 2.5  Extended example/definition: product metric
 2.6  Open sets
 2.7  Closed sets
 2.8  A few harder problems to think about
II  Basic Abstract Algebra
3  Homomorphisms and quotient groups
 3.1  Generators and group presentations
 3.2  Homomorphisms
 3.3  Cosets and modding out
 3.4  (Optional) Proof of Lagrange’s theorem
 3.5  Eliminating the homomorphism
 3.6  (Digression) The first isomorphism theorem
 3.7  A few harder problems to think about
4  Rings and ideals
 4.1  Some motivational metaphors about rings vs groups
 4.2  (Optional) Pedagogical notes on motivation
 4.3  Definition and examples of rings
 4.4  Fields
 4.5  Homomorphisms
 4.6  Ideals
 4.7  Generating ideals
 4.8  Principal ideal domains
 4.9  Noetherian rings
 4.10  A few harder problems to think about
5  Flavors of rings
 5.1  Fields
 5.2  Integral domains
 5.3  Prime ideals
 5.4  Maximal ideals
 5.5  Field of fractions
 5.6  Unique factorization domains (UFD’s)
 5.7  A few harder problems to think about
III  Basic Topology
6  Properties of metric spaces
 6.1  Boundedness
 6.2  Completeness
 6.3  Let the buyer beware
 6.4  Subspaces, and (inb4) a confusing linguistic point
 6.5  A few harder problems to think about
7  Topological spaces
 7.1  Forgetting the metric
 7.2  Re-definitions
 7.3  Hausdorff spaces
 7.4  Subspaces
 7.5  Connected spaces
 7.6  Path-connected spaces
 7.7  Homotopy and simply connected spaces
 7.8  Bases of spaces
 7.9  A few harder problems to think about
8  Compactness
 8.1  Definition of sequential compactness
 8.2  Criteria for compactness
 8.3  Compactness using open covers
 8.4  Applications of compactness
 8.5  (Optional) Equivalence of formulations of compactness
 8.6  A few harder problems to think about
IV  Linear Algebra
9  Vector spaces
 9.1  The definitions of a ring and field
 9.2  Modules and vector spaces
 9.3  Direct sums
 9.4  Linear independence, spans, and basis
 9.5  Linear maps
 9.6  What is a matrix?
 9.7  Subspaces and picking convenient bases
 9.8  A cute application: Lagrange interpolation
 9.9  (Digression) Arrays of numbers are evil
 9.10  A word on general modules
 9.11  A few harder problems to think about
10  Eigen-things
 10.1  Why you should care
 10.2  Warning on assumptions
 10.3  Eigenvectors and eigenvalues
 10.4  The Jordan form
 10.5  Nilpotent maps
 10.6  Reducing to the nilpotent case
 10.7  (Optional) Proof of nilpotent Jordan
 10.8  Algebraic and geometric multiplicity
 10.9  A few harder problems to think about
11  Dual space and trace
 11.1  Tensor product
 11.2  Dual space
 11.3  V^∨ ⊗ W gives matrices from V to W
 11.4  The trace
 11.5  A few harder problems to think about
12  Determinant
 12.1  Wedge product
 12.2  The determinant
 12.3  Characteristic polynomials, and Cayley-Hamilton
 12.4  A few harder problems to think about
13  Inner product spaces
 13.1  The inner product
 13.2  Norms
 13.3  Orthogonality
 13.4  Hilbert spaces
 13.5  A few harder problems to think about
14  Bonus: Fourier analysis
 14.1  Synopsis
 14.2  A reminder on Hilbert spaces
 14.3  Common examples
 14.4  Summary, and another teaser
 14.5  Parseval and friends
 14.6  Application: Basel problem
 14.7  Application: Arrow’s Impossibility Theorem
 14.8  A few harder problems to think about
15  Duals, adjoint, and transposes
 15.1  Dual of a map
 15.2  Identifying with the dual space
 15.3  The adjoint (conjugate transpose)
 15.4  Eigenvalues of normal maps
 15.5  A few harder problems to think about
V  More on Groups
16  Group actions overkill AIME problems
 16.1  Definition of a group action
 16.2  Stabilizers and orbits
 16.3  Burnside’s lemma
 16.4  Conjugation of elements
 16.5  A few harder problems to think about
17  Find all groups
 17.1  Sylow theorems
 17.2  (Optional) Proving Sylow’s theorem
 17.3  (Optional) Simple groups and Jordan-Hölder
 17.4  A few harder problems to think about
18  The PID structure theorem
 18.1  Finitely generated abelian groups
 18.2  Some ring theory prerequisites
 18.3  The structure theorem
 18.4  Reduction to maps of free R-modules
 18.5  Smith normal form
 18.6  A few harder problems to think about
VI  Representation Theory
19  Representations of algebras
 19.1  Algebras
 19.2  Representations
 19.3  Direct sums
 19.4  Irreducible and indecomposable representations
 19.5  Morphisms of representations
 19.6  The representations of Matd(k)
 19.7  A few harder problems to think about
20  Semisimple algebras
 20.1  Schur’s lemma continued
 20.2  Density theorem
 20.3  Semisimple algebras
 20.4  Maschke’s theorem
 20.5  Example: the representations of ℂ[S₃]
 20.6  A few harder problems to think about
21  Characters
 21.1  Definitions
 21.2  The dual space modulo the commutator
 21.3  Orthogonality of characters
 21.4  Examples of character tables
 21.5  A few harder problems to think about
22  Some applications
 22.1  Frobenius divisibility
 22.2  Burnside’s theorem
 22.3  Frobenius determinant
VII  Quantum Algorithms
23  Quantum states and measurements
 23.1  Bra-ket notation
 23.2  The state space
 23.3  Observations
 23.4  Entanglement
 23.5  A few harder problems to think about
24  Quantum circuits
 24.1  Classical logic gates
 24.2  Reversible classical logic
 24.3  Quantum logic gates
 24.4  Deutsch-Jozsa algorithm
 24.5  A few harder problems to think about
25  Shor’s algorithm
 25.1  The classical (inverse) Fourier transform
 25.2  The quantum Fourier transform
 25.3  Shor’s algorithm
VIII  Calculus 101
26  Limits and series
 26.1  Completeness and inf/sup
 26.2  Proofs of the two key completeness properties of ℝ
 26.3  Monotonic sequences
 26.4  Infinite series
 26.5  Series addition is not commutative: a horror story
 26.6  Limits of functions at points
 26.7  Limits of functions at infinity
 26.8  A few harder problems to think about
27  Bonus: A hint of p-adic numbers
 27.1  Motivation
 27.2  Algebraic perspective
 27.3  Analytic perspective
 27.4  Mahler coefficients
 27.5  A few harder problems to think about
28  Differentiation
 28.1  Definition
 28.2  How to compute them
 28.3  Local (and global) maximums
 28.4  Rolle and friends
 28.5  Smooth functions
 28.6  A few harder problems to think about
29  Power series and Taylor series
 29.1  Motivation
 29.2  Power series
 29.3  Differentiating them
 29.4  Analytic functions
 29.5  A definition of Euler’s constant and exponentiation
 29.6  This all works over complex numbers as well, except also complex analysis is heaven
 29.7  A few harder problems to think about
30  Riemann integrals
 30.1  Uniform continuity
 30.2  Dense sets and extension
 30.3  Defining the Riemann integral
 30.4  Meshes
 30.5  A few harder problems to think about
IX  Complex Analysis
31  Holomorphic functions
 31.1  The nicest functions on earth
 31.2  Complex differentiation
 31.3  Contour integrals
 31.4  Cauchy-Goursat theorem
 31.5  Cauchy’s integral theorem
 31.6  Holomorphic functions are analytic
 31.7  A few harder problems to think about
32  Meromorphic functions
 32.1  The second nicest functions on earth
 32.2  Meromorphic functions
 32.3  Winding numbers and the residue theorem
 32.4  Argument principle
 32.5  Philosophy: why are holomorphic functions so nice?
 32.6  A few harder problems to think about
33  Holomorphic square roots and logarithms
 33.1  Motivation: square root of a complex number
 33.2  Square roots of holomorphic functions
 33.3  Covering projections
 33.4  Complex logarithms
 33.5  Some special cases
 33.6  A few harder problems to think about
X  Measure Theory
34  Measure spaces
 34.1  Motivating measure spaces via random variables
 34.2  Motivating measure spaces geometrically
 34.3  σ-algebras and measurable spaces
 34.4  Measure spaces
 34.5  A hint of Banach-Tarski
 34.6  Measurable functions
 34.7  On the word “almost” (TO DO)
 34.8  A few harder problems to think about
35  Constructing the Borel and Lebesgue measure
 35.1  Pre-measures
 35.2  Outer measures
 35.3  Carathéodory extension for outer measures
 35.4  Defining the Lebesgue measure
 35.5  A fourth row: Carathéodory for pre-measures
 35.6  From now on, we assume the Borel measure
 35.7  A few harder problems to think about
36  Lebesgue integration
 36.1  The definition
 36.2  Relation to Riemann integrals (or: actually computing Lebesgue integrals)
 36.3  A few harder problems to think about
37  Swapping order with Lebesgue integrals
 37.1  Motivating limit interchange
 37.2  Overview
 37.3  Fatou’s lemma
 37.4  Everything else
 37.5  Fubini and Tonelli
 37.6  A few harder problems to think about
38  Bonus: A hint of Pontryagin duality
 38.1  LCA groups
 38.2  The Pontryagin dual
 38.3  The orthonormal basis in the compact case
 38.4  The Fourier transform of the non-compact case
 38.5  Summary
 38.6  A few harder problems to think about
XI  Probability (TO DO)
39  Random variables (TO DO)
 39.1  Random variables
 39.2  Distribution functions
 39.3  Examples of random variables
 39.4  Characteristic functions
 39.5  Independent random variables
 39.6  A few harder problems to think about
40  Large number laws (TO DO)
 40.1  Notions of convergence
 40.2  A few harder problems to think about
41  Stopped martingales (TO DO)
 41.1  How to make money almost surely
 41.2  Sub-σ-algebras and filtrations
 41.3  Conditional expectation
 41.4  Supermartingales
 41.5  Optional stopping
 41.6  Fun applications of optional stopping (TO DO)
 41.7  A few harder problems to think about
XII  Differential Geometry
42  Multivariable calculus done correctly
 42.1  The total derivative
 42.2  The projection principle
 42.3  Total and partial derivatives
 42.4  (Optional) A word on higher derivatives
 42.5  Towards differential forms
 42.6  A few harder problems to think about
43  Differential forms
 43.1  Pictures of differential forms
 43.2  Pictures of exterior derivatives
 43.3  Differential forms
 43.4  Exterior derivatives
 43.5  Closed and exact forms
 43.6  A few harder problems to think about
44  Integrating differential forms
 44.1  Motivation: line integrals
 44.2  Pullbacks
 44.3  Cells
 44.4  Boundaries
 44.5  Stokes’ theorem
 44.6  A few harder problems to think about
45  A bit of manifolds
 45.1  Topological manifolds
 45.2  Smooth manifolds
 45.3  Regular value theorem
 45.4  Differential forms on manifolds
 45.5  Orientations
 45.6  Stokes’ theorem for manifolds
 45.7  (Optional) The tangent and contangent space
 45.8  A few harder problems to think about
XIII  Algebraic NT I: Rings of Integers
46  Algebraic integers
 46.1  Motivation from high school algebra
 46.2  Algebraic numbers and algebraic integers
 46.3  Number fields
 46.4  Primitive element theorem, and monogenic extensions
 46.5  A few harder problems to think about
47  The ring of integers
 47.1  Norms and traces
 47.2  The ring of integers
 47.3  On monogenic extensions
 47.4  A few harder problems to think about
48  Unique factorization (finally!)
 48.1  Motivation
 48.2  Ideal arithmetic
 48.3  Dedekind domains
 48.4  Unique factorization works
 48.5  The factoring algorithm
 48.6  Fractional ideals
 48.7  The ideal norm
 48.8  A few harder problems to think about
49  Minkowski bound and class groups
 49.1  The class group
 49.2  The discriminant of a number field
 49.3  The signature of a number field
 49.4  Minkowski’s theorem
 49.5  The trap box
 49.6  The Minkowski bound
 49.7  The class group is finite
 49.8  Computation of class numbers
 49.9  A few harder problems to think about
50  More properties of the discriminant
 50.1  A few harder problems to think about
51  Bonus: Let’s solve Pell’s equation!
 51.1  Units
 51.2  Dirichlet’s unit theorem
 51.3  Finding fundamental units
 51.4  Pell’s equation
 51.5  A few harder problems to think about
XIV  Algebraic NT II: Galois and Ramification Theory
52  Things Galois
 52.1  Motivation
 52.2  Field extensions, algebraic closures, and splitting fields
 52.3  Embeddings into algebraic closures for number fields
 52.4  Everyone hates characteristic 2: separable vs irreducible
 52.5  Automorphism groups and Galois extensions
 52.6  Fundamental theorem of Galois theory
 52.7  A few harder problems to think about
 52.8  (Optional) Proof that Galois extensions are splitting
53  Finite fields
 53.1  Example of a finite field
 53.2  Finite fields have prime power order
 53.3  All finite fields are isomorphic
 53.4  The Galois theory of finite fields
 53.5  A few harder problems to think about
54  Ramification theory
 54.1  Ramified / inert / split primes
 54.2  Primes ramify if and only if they divide ΔK
 54.3  Inertial degrees
 54.4  The magic of Galois extensions
 54.5  (Optional) Decomposition and inertia groups
 54.6  Tangential remark: more general Galois extensions
 54.7  A few harder problems to think about
55  The Frobenius element
 55.1  Frobenius elements
 55.2  Conjugacy classes
 55.3  Chebotarev density theorem
 55.4  Example: Frobenius elements of cyclotomic fields
 55.5  Frobenius elements behave well with restriction
 55.6  Application: Quadratic reciprocity
 55.7  Frobenius elements control factorization
 55.8  Example application: IMO 2003 problem 6
 55.9  A few harder problems to think about
56  Bonus: A Bit on Artin Reciprocity
 56.1  Infinite primes
 56.2  Modular arithmetic with infinite primes
 56.3  Infinite primes in extensions
 56.4  Frobenius element and Artin symbol
 56.5  Artin reciprocity
 56.6  A few harder problems to think about
XV  Algebraic Topology I: Homotopy
57  Some topological constructions
 57.1  Spheres
 57.2  Quotient topology
 57.3  Product topology
 57.4  Disjoint union and wedge sum
 57.5  CW complexes
 57.6  The torus, Klein bottle, ℝℙn, ℂℙn
 57.7  A few harder problems to think about
58  Fundamental groups
 58.1  Fusing paths together
 58.2  Fundamental groups
 58.3  Fundamental groups are functorial
 58.4  Higher homotopy groups
 58.5  Homotopy equivalent spaces
 58.6  The pointed homotopy category
 58.7  A few harder problems to think about
59  Covering projections
 59.1  Even coverings and covering projections
 59.2  Lifting theorem
 59.3  Lifting correspondence
 59.4  Regular coverings
 59.5  The algebra of fundamental groups
 59.6  A few harder problems to think about
XVI  Category Theory
60  Objects and morphisms
 60.1  Motivation: isomorphisms
 60.2  Categories, and examples thereof
 60.3  Special objects in categories
 60.4  Binary products
 60.5  Monic and epic maps
 60.6  A few harder problems to think about
61  Functors and natural transformations
 61.1  Many examples of functors
 61.2  Covariant functors
 61.3  Contravariant functors
 61.4  Equivalence of categories
 61.5  (Optional) Natural transformations
 61.6  (Optional) The Yoneda lemma
 61.7  A few harder problems to think about
62  Limits in categories (TO DO)
 62.1  Equalizers
 62.2  Pullback squares (TO DO)
 62.3  Limits
 62.4  A few harder problems to think about
63  Abelian categories
 63.1  Zero objects, kernels, cokernels, and images
 63.2  Additive and abelian categories
 63.3  Exact sequences
 63.4  The Freyd-Mitchell embedding theorem
 63.5  Breaking long exact sequences
 63.6  A few harder problems to think about
XVII  Algebraic Topology II: Homology
64  Singular homology
 64.1  Simplices and boundaries
 64.2  The singular homology groups
 64.3  The homology functor and chain complexes
 64.4  More examples of chain complexes
 64.5  A few harder problems to think about
65  The long exact sequence
 65.1  Short exact sequences and four examples
 65.2  The long exact sequence of homology groups
 65.3  The Mayer-Vietoris sequence
 65.4  A few harder problems to think about
66  Excision and relative homology
 66.1  The long exact sequences
 66.2  The category of pairs
 66.3  Excision
 66.4  Some applications
 66.5  Invariance of dimension
 66.6  A few harder problems to think about
67  Bonus: Cellular homology
 67.1  Degrees
 67.2  Cellular chain complex
 67.3  The cellular boundary formula
 67.4  A few harder problems to think about
68  Singular cohomology
 68.1  Cochain complexes
 68.2  Cohomology of spaces
 68.3  Cohomology of spaces is functorial
 68.4  Universal coefficient theorem
 68.5  Example computation of cohomology groups
 68.6  Relative cohomology groups
 68.7  A few harder problems to think about
69  Application of cohomology
 69.1  Poincaré duality
 69.2  de Rham cohomology
 69.3  Graded rings
 69.4  Cup products
 69.5  Relative cohomology pseudo-rings
 69.6  Wedge sums
 69.7  Künneth formula
 69.8  A few harder problems to think about
XVIII  Algebraic Geometry I: Classical Varieties
70  Affine varieties
 70.1  Affine varieties
 70.2  Naming affine varieties via ideals
 70.3  Radical ideals and Hilbert’s Nullstellensatz
 70.4  Pictures of varieties in 𝔸1
 70.5  Prime ideals correspond to irreducible affine varieties
 70.6  Pictures in 𝔸2 and 𝔸3
 70.7  Maximal ideals
 70.8  Motivating schemes with non-radical ideals
 70.9  A few harder problems to think about
71  Affine varieties as ringed spaces
 71.1  Synopsis
 71.2  The Zariski topology on 𝔸n
 71.3  The Zariski topology on affine varieties
 71.4  Coordinate rings
 71.5  The sheaf of regular functions
 71.6  Regular functions on distinguished open sets
 71.7  Baby ringed spaces
 71.8  A few harder problems to think about
72  Projective varieties
 72.1  Graded rings
 72.2  The ambient space
 72.3  Homogeneous ideals
 72.4  As ringed spaces
 72.5  Examples of regular functions
 72.6  A few harder problems to think about
73  Bonus: Bézout’s theorem
 73.1  Non-radical ideals
 73.2  Hilbert functions of finitely many points
 73.3  Hilbert polynomials
 73.4  Bézout’s theorem
 73.5  Applications
 73.6  A few harder problems to think about
74  Morphisms of varieties
 74.1  Defining morphisms of baby ringed spaces
 74.2  Classifying the simplest examples
 74.3  Some more applications and examples
 74.4  The hyperbola effect
 74.5  A few harder problems to think about
XIX  Algebraic Geometry II: Affine Schemes
75  Sheaves and ringed spaces
 75.1  Motivation and warnings
 75.2  Pre-sheaves
 75.3  Stalks and germs
 75.4  Sheaves
 75.5  For sheaves, sections “are” sequences of germs
 75.6  Sheafification (optional)
 75.7  A few harder problems to think about
76  Localization
 76.1  Spoilers
 76.2  The definition
 76.3  Localization away from an element
 76.4  Localization at a prime ideal
 76.5  Prime ideals of localizations
 76.6  Prime ideals of quotients
 76.7  Localization commutes with quotients
 76.8  A few harder problems to think about
77  Affine schemes: the Zariski topology
 77.1  Some more advertising
 77.2  The set of points
 77.3  The Zariski topology on the spectrum
 77.4  On radicals
 77.5  A few harder problems to think about
78  Affine schemes: the sheaf
 78.1  A useless definition of the structure sheaf
 78.2  The value of distinguished open sets (or: how to actually compute sections)
 78.3  The stalks of the structure sheaf
 78.4  Local rings and residue fields: linking germs to values
 78.5  Recap
 78.6  Functions are determined by germs, not values
 78.7  A few harder problems to think about
79  Interlude: eighteen examples of affine schemes
 79.1  Example: Spec k, a single point
 79.2  Spec ℂ[x], a one-dimensional line
 79.3  Spec ℝ[x], a one-dimensional line with complex conjugates glued (no fear nullstellensatz)
 79.4  Spec k[x], over any ground field
 79.5  Spec ℤ, a one-dimensional scheme
 79.6  Spec k[x]/(x² − x), two points
 79.7  Spec k[x]/(x²), the double point
 79.8  Spec k[x]/(x³ − x²), a double point and a single point
 79.9  Spec ℤ/60ℤ, a scheme with three points
 79.10  Spec k[x,y], the two-dimensional plane
 79.11  Spec ℤ[x], a two-dimensional scheme, and Mumford’s picture
 79.12  Spec k[x,y]/(y − x²), the parabola
 79.13  Spec ℤ[i], the Gaussian integers (one-dimensional)
 79.14  Long example: Spec k[x,y]/(xy), two axes
 79.15  Spec k[x, x⁻¹], the punctured line (or hyperbola)
 79.16  Spec k[x]_(x), zooming in to the origin of the line
 79.17  Spec k[x,y]_(x,y), zooming in to the origin of the plane
 79.18  Spec k[x,y]_(0) = Spec k(x,y), the stalk above the generic point
 79.19  A few harder problems to think about
80  Morphisms of locally ringed spaces
 80.1  Morphisms of ringed spaces via sections
 80.2  Morphisms of ringed spaces via stalks
 80.3  Morphisms of locally ringed spaces
 80.4  A few examples of morphisms between affine schemes
 80.5  The big theorem
 80.6  More examples of scheme morphisms
 80.7  A little bit on non-affine schemes
 80.8  Where to go from here
 80.9  A few harder problems to think about
XX  Algebraic Geometry III: Schemes (TO DO)
XXI  Set Theory I: ZFC, Ordinals, and Cardinals
81  Interlude: Cauchy’s functional equation and Zorn’s lemma
 81.1  Let’s construct a monster
 81.2  Review of finite induction
 81.3  Transfinite induction
 81.4  Wrapping up functional equations
 81.5  Zorn’s lemma
 81.6  A few harder problems to think about
82  Zermelo-Fraenkel with choice
 82.1  The ultimate functional equation
 82.2  Cantor’s paradox
 82.3  The language of set theory
 82.4  The axioms of ZFC
 82.5  Encoding
 82.6  Choice and well-ordering
 82.7  Sets vs classes
 82.8  A few harder problems to think about
83  Ordinals
 83.1  Counting for preschoolers
 83.2  Counting for set theorists
 83.3  Definition of an ordinal
 83.4  Ordinals are “tall”
 83.5  Transfinite induction and recursion
 83.6  Ordinal arithmetic
 83.7  The hierarchy of sets
 83.8  A few harder problems to think about
84  Cardinals
 84.1  Equinumerous sets and cardinals
 84.2  Cardinalities
 84.3  Aleph numbers
 84.4  Cardinal arithmetic
 84.5  Cardinal exponentiation
 84.6  Cofinality
 84.7  Inaccessible cardinals
 84.8  A few harder problems to think about
XXII  Set Theory II: Model Theory and Forcing
85  Inner model theory
 85.1  Models
 85.2  Sentences and satisfaction
 85.3  The Levy hierarchy
 85.4  Substructures, and Tarski-Vaught
 85.5  Obtaining the axioms of ZFC
 85.6  Mostowski collapse
 85.7  Adding an inaccessible
 85.8  FAQ’s on countable models
 85.9  Picturing inner models
 85.10  A few harder problems to think about
86  Forcing
 86.1  Setting up posets
 86.2  More properties of posets
 86.3  Names, and the generic extension
 86.4  Fundamental theorem of forcing
 86.5  (Optional) Defining the relation
 86.6  The remaining axioms
 86.7  A few harder problems to think about
87  Breaking the continuum hypothesis
 87.1  Adding in reals
 87.2  The countable chain condition
 87.3  Preserving cardinals
 87.4  Infinite combinatorics
 87.5  A few harder problems to think about
XXIII  Appendix
A  Pedagogical comments and references
 A.1  Basic algebra and topology
 A.2  Second-year topics
 A.3  Advanced topics
 A.4  Topics not in Napkin
B  Hints to selected problems
C  Sketches of selected solutions
D  Glossary of notations
 D.1  General
 D.2  Functions and sets
 D.3  Abstract and linear algebra
 D.4  Quantum computation
 D.5  Topology and real/complex analysis
 D.6  Measure theory and probability
 D.7  Algebraic topology
 D.8  Category theory
 D.9  Differential geometry
 D.10  Algebraic number theory
 D.11  Representation theory
 D.12  Algebraic geometry
 D.13  Set theory
E  Terminology on sets and functions
 E.1  Sets
 E.2  Functions
 E.3  Equivalence relations

Part I
Starting Out

0  Sales pitches

This chapter contains a pitch for each part, to help you decide what you want to read and to elaborate more on how they are interconnected.

For convenience, here is again the dependency plot that appeared in the frontmatter.

[Figure: chapter dependency chart, as in the frontmatter.]

0.1  The basics

0.2  Abstract algebra

0.3  Real and complex analysis

0.4  Algebraic number theory

0.5  Algebraic topology

0.6  Algebraic geometry

0.7  Set theory

1  Groups

A group is one of the most basic structures in higher mathematics. In this chapter I will tell you only the bare minimum: what a group is, and when two groups are the same.

1.1  Definition and examples of groups

Prototypical example for this section: The additive group of integers (ℤ, +) and the cyclic group ℤ/mℤ. Just don’t let yourself forget that most groups are non-commutative.

A group consists of two pieces of data: a set G, and an associative binary operation with some properties. Before I write down the definition of a group, let me give two examples.

Example 1.1.1 (Additive integers)
The pair (ℤ, +) is a group: ℤ = {…, −2, −1, 0, 1, 2, …} is the set and the associative operation is addition. Note that

  • the element 0 ∈ ℤ is an identity: a + 0 = 0 + a = a for any a, and
  • every element a ∈ ℤ has an additive inverse: −a.

We call this group ℤ.

Example 1.1.2 (Nonzero rationals)
Let ℚ× be the set of nonzero rational numbers. The pair (ℚ×, ×) is a group: the set is ℚ× and the associative operation is multiplication.

Again we see the same two nice properties: the identity is 1, and the inverse of x is 1/x.

From this you might already have a guess what the definition of a group is.

Definition 1.1.3. A group is a pair G = (G, ⋆) consisting of a set of elements G, and a binary operation ⋆ on G, such that:

(i)
G has an identity element 1_G (often abbreviated to just 1) satisfying 1_G ⋆ g = g ⋆ 1_G = g for all g ∈ G.
(ii)
The operation ⋆ is associative: (a ⋆ b) ⋆ c = a ⋆ (b ⋆ c) for all a, b, c ∈ G.
(iii)
Every element g ∈ G has an inverse g⁻¹ ∈ G satisfying g ⋆ g⁻¹ = g⁻¹ ⋆ g = 1_G.

Remark 1.1.4 (Unimportant pedantic point) Some authors like to add a “closure” axiom, i.e. to say explicitly that g ⋆ h ∈ G. This is implied already by the fact that ⋆ is a binary operation on G, but is worth keeping in mind for the examples below.

Remark 1.1.5 — It is not required that ⋆ is commutative (a ⋆ b = b ⋆ a). So we say that a group is abelian if the operation is commutative and non-abelian otherwise.

Example 1.1.6 (Non-examples of groups)

(a)
The pair (ℤ, ×) is not a group: even though 1 is an identity, most integers have no multiplicative inverse (there is no integer x with 2 × x = 1).
(b)
The pair (ℚ, ×) is also not a group: the element 0 has no multiplicative inverse.
Let’s resume writing down examples. Here are some more abelian examples of groups:

Example 1.1.7 (Complex unit circle)
Let S¹ denote the set of complex numbers z with absolute value one; that is,

S¹ := {z ∈ ℂ ∣ |z| = 1}.

Then (S¹, ×) is a group because the identity 1 lies in S¹, and each z ∈ S¹ has an inverse 1/z = z̄ which again lies in S¹.

There is one thing I ought to also check: that z₁ × z₂ is actually still in S¹. But this follows from the fact that |z₁z₂| = |z₁||z₂| = 1.

Example 1.1.8 (Addition mod n)
Here is an example from number theory: Let n > 1 be an integer, and consider the residues (remainders) modulo n. These form a group under addition. We call this the cyclic group of order n, and denote it as ℤ/nℤ, with elements 0, 1, …, n−1. The identity is 0.

Example 1.1.9 (Multiplication mod p)
Let p be a prime. Consider the nonzero residues modulo p, which we denote by (ℤ/pℤ)×. Then ((ℤ/pℤ)×, ×) is a group.

Question 1.1.10. Why do we need the fact that p is prime?

(Digression: the notation ℤ/nℤ and (ℤ/pℤ)× may seem strange but will make sense when we talk about rings and ideals. Set aside your worry for now.)

Here are some non-abelian examples:

Example 1.1.11 (General linear group)
Let n be a positive integer. Then GLₙ(ℝ) is defined as the set of n × n real matrices which have nonzero determinant. It turns out that with this condition, every matrix does indeed have an inverse, so (GLₙ(ℝ), ×) is a group, called the general linear group.

(The fact that GLₙ(ℝ) is closed under × follows from the linear algebra fact that det(AB) = det A · det B, proved in later chapters.)

Example 1.1.12 (Special linear group)
Following the example above, let SLₙ(ℝ) denote the set of n × n matrices whose determinant is actually 1. Again, for linear algebra reasons it turns out that (SLₙ(ℝ), ×) is also a group, called the special linear group.

Example 1.1.13 (Symmetric groups)
Let Sₙ be the set of permutations of {1, …, n}. By viewing these permutations as functions from {1, …, n} to itself, we can consider compositions of permutations. Then the pair (Sₙ, ∘) (here ∘ is function composition) is also a group: composition is associative, the identity permutation is an identity, and each permutation has an inverse.

The group Sₙ is called the symmetric group on n elements.

Example 1.1.14 (Dihedral group)
The dihedral group of order 2n, denoted D₂ₙ, is the group of symmetries of a regular n-gon A₁A₂⋯Aₙ, which includes rotations and reflections. It consists of the 2n elements

{1, r, r², …, rⁿ⁻¹, s, sr, sr², …, srⁿ⁻¹}.

The element r corresponds to rotating the n-gon by 2π/n, while s corresponds to reflecting it across the line OA₁ (here O is the center of the polygon). So rs means “reflect then rotate” (like with function composition, we read from right to left).

In particular, rⁿ = s² = 1. You can also see that rᵏs = sr⁻ᵏ.

Here is a picture of some elements of D₁₀.

Trivia: the dihedral group D₁₂ is my favorite example of a non-abelian group, and is the first group I try for any exam question of the form “find an example…”.
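To see the relation rᵏs = sr⁻ᵏ in action, here is a short Python sketch (mine, not the book’s) that models the vertices of the n-gon as residues 0, …, n − 1, with r adding one and s negating, and composition read right to left as in the text:

    # Check r^k s = s r^(-k) in D_2n, for the pentagon (n = 5, so D_10).
    n = 5

    def r(i, k=1):
        return (i + k) % n   # rotate by k steps

    def s(i):
        return (-i) % n      # reflect across the line through vertex 0

    for k in range(n):
        lhs = [r(s(i), k) for i in range(n)]    # r^k s: reflect, then rotate k times
        rhs = [s(r(i, -k)) for i in range(n)]   # s r^(-k): rotate -k times, then reflect
        assert lhs == rhs
    print("r^k s = s r^(-k) holds for every k")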

More examples:

Example 1.1.15 (Products of groups)
Let (G, ⋆) and (H, ∗) be groups. We can define a product group (G × H, ·), as follows. The elements of the group will be ordered pairs (g, h) ∈ G × H. Then

(g₁, h₁) · (g₂, h₂) = (g₁ ⋆ g₂, h₁ ∗ h₂) ∈ G × H

is the group operation.

Question 1.1.16. What are the identity and inverses of the product group?

Example 1.1.17 (Trivial group)
The trivial group, often denoted 0 or 1, is the group with only an identity element. I will use the notation {1}.

Exercise 1.1.18. Which of these are groups?

(a)
Rational numbers with odd denominators (in simplest form), where the operation is addition. (This includes the integers, written as n/1, and 0 = 0/1.)
(b)
The set of rational numbers with denominator at most 2, where the operation is addition.
(c)
The set of rational numbers with denominator at most 2, where the operation is multiplication.
(d)
The set of nonnegative integers, where the operation is addition.

1.2  Properties of groups

Prototypical example for this section: (ℤ/pℤ)× is possibly best.

Abuse of Notation 1.2.1. From now on, we’ll often refer to a group (G,⋆) by just G. Moreover, we’ll abbreviate a ⋆ b to just ab. Also, because the operation is associative, we will omit unnecessary parentheses: (ab)c = a(bc) = abc.

Abuse of Notation 1.2.2. From now on, for any g ∈ G and positive integer n we abbreviate

gⁿ = g ⋆ ⋯ ⋆ g  (n times).

Moreover, we let g⁻¹ denote the inverse of g, and g⁻ⁿ = (g⁻¹)ⁿ.

In mathematics, a common theme is to require that objects satisfy certain minimalistic properties, with certain examples in mind, but then ignore the examples on paper and try to deduce as much as you can just from the properties alone. (Math olympiad veterans are likely familiar with “functional equations” in which knowing a single property about a function is enough to determine the entire function.) Let’s try to do this here, and see what we can conclude just from the group axioms in Definition 1.1.3.

It is a law in Guam and 37 other states that I now state the following proposition.

Fact 1.2.3. Let G be a group.

(a)
The identity of a group is unique.
(b)
The inverse of any element is unique.
(c)
For any g G, (g1)1 = g.

Proof. This is mostly just some formal manipulations, and you needn’t feel bad skipping it on a first read.

(a)
If 1 and 1′ are both identities, then 1 = 1 ⋆ 1′ = 1′.
(b)
If h and h′ are both inverses to g, then h′ = h′ ⋆ 1_G = h′ ⋆ (g ⋆ h) = (h′ ⋆ g) ⋆ h = 1_G ⋆ h = h.
(c)
Trivial; omitted. □

Now we state a slightly more useful proposition.

Proposition 1.2.4 (Inverse of products)
Let G be a group, and a, b ∈ G. Then (ab)⁻¹ = b⁻¹a⁻¹.

Proof. Direct computation. We have

(ab)(b⁻¹a⁻¹) = a(bb⁻¹)a⁻¹ = aa⁻¹ = 1_G.

Hence (ab)⁻¹ = b⁻¹a⁻¹. Similarly, (b⁻¹a⁻¹)(ab) = 1_G as well. □

Finally, we state a very important lemma about groups, which highlights why having an inverse is so valuable.

Lemma 1.2.5 (Left multiplication is a bijection)
Let G be a group, and pick a g ∈ G. Then the map G → G given by x ↦ gx is a bijection.

Exercise 1.2.6. Check this by showing injectivity and surjectivity directly. (If you don’t know what these words mean, consult Appendix E.)

Example 1.2.7
Let G = (ℤ/7ℤ)× (as in Example 1.1.9) and pick g = 3. The above lemma states that the map x ↦ 3x is a bijection, and we can see this explicitly:

1 ↦ 3 (mod 7)
2 ↦ 6 (mod 7)
3 ↦ 2 (mod 7)
4 ↦ 5 (mod 7)
5 ↦ 1 (mod 7)
6 ↦ 4 (mod 7).

The fact that the map is injective is often called the cancellation law. (Why do you think so?)
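If you want to double-check the table above, a few lines of Python (my sketch, not the book’s) confirm that multiplication by g = 3 merely permutes the elements of (ℤ/7ℤ)×:

    p, g = 7, 3
    elements = list(range(1, p))
    mapping = {x: (g * x) % p for x in elements}
    print(mapping)                               # {1: 3, 2: 6, 3: 2, 4: 5, 5: 1, 6: 4}
    assert sorted(mapping.values()) == elements  # the map x -> gx is a bijection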

Abuse of Notation 1.2.8 (Later on, sometimes the identity is denoted 0 instead of 1). You don’t need to worry about this for a few chapters, but I’ll bring it up now anyways. In most of our examples up until now the operation was thought of like multiplication of some sort, which is why 1 = 1G was a natural notation for the identity element.

But there are groups like ℤ = (ℤ, +) where the operation is thought of as addition, in which case the notation 0 = 0_G might make more sense instead. (In general, whenever an operation is denoted +, the operation is almost certainly commutative.) We will eventually start doing so too when we discuss rings and linear algebra.

1.3  Isomorphisms

Prototypical example for this section: ℤ ≅ 10ℤ.

First, let me talk about what it means for groups to be isomorphic. Consider the two groups ℤ = (ℤ, +) and 10ℤ = ({…, −20, −10, 0, 10, 20, …}, +).

These groups are “different”, but only superficially so – you might even say they only differ in the names of the elements. Think about what this might mean formally for a moment.

Specifically the map

ϕ : ℤ → 10ℤ  by  x ↦ 10x

is a bijection of the underlying sets which respects the group action. In symbols,

ϕ(x + y) = ϕ(x) + ϕ(y).

In other words, ϕ is a way of re-assigning names of the elements without changing the structure of the group. That’s all just formalism for capturing the obvious fact that (ℤ, +) and (10ℤ, +) are the same thing.

Now, let’s do the general definition.

Definition 1.3.1. Let G = (G,⋆) and H = (H,) be groups. A bijection ϕ : G H is called an isomorphism if

ϕ(g₁ ⋆ g₂) = ϕ(g₁) ∗ ϕ(g₂)  for all g₁, g₂ ∈ G.

If there exists an isomorphism from G to H, then we say G and H are isomorphic and write G ≅ H.

Note that in this definition, the left-hand side ϕ(g₁ ⋆ g₂) uses the operation of G while the right-hand side ϕ(g₁) ∗ ϕ(g₂) uses the operation of H.

Example 1.3.2 (Examples of isomorphisms)
Let G and H be groups. We have the following isomorphisms.

(a)
ℤ ≅ 10ℤ, as above.
(b)
There is an isomorphism

G × H ≅ H × G

by the map (g, h) ↦ (h, g).

(c)
The identity map id : G → G is an isomorphism, hence G ≅ G.
(d)
There is another isomorphism of ℤ to itself: send every x to −x.

Example 1.3.3 (Primitive roots modulo 7)
As a nontrivial example, we claim that ℤ/6ℤ ≅ (ℤ/7ℤ)×. The bijection is

ϕ(a mod 6) = 3ᵃ mod 7.

To check that this is an isomorphism, we need to verify several things: that ϕ is well-defined, that it is a bijection, and that it respects the two group operations.
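A brute-force verification is easy to carry out by computer; here is a quick Python sketch (my own, not part of the text):

    # Check that phi(a) = 3^a mod 7 is an isomorphism Z/6Z -> (Z/7Z)^x.
    phi = lambda a: pow(3, a, 7)

    # Bijection: the six values 3^0, ..., 3^5 are pairwise distinct mod 7.
    assert sorted(phi(a) for a in range(6)) == [1, 2, 3, 4, 5, 6]

    # Homomorphism: addition mod 6 is carried to multiplication mod 7.
    for a in range(6):
        for b in range(6):
            assert phi((a + b) % 6) == (phi(a) * phi(b)) % 7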

Example 1.3.4 (Primitive roots)
More generally, for any prime p, there exists an element g ∈ (ℤ/pℤ)× called a primitive root modulo p such that 1, g, g², …, g^(p−2) are all different modulo p. One can show by copying the above proof that

ℤ/(p−1)ℤ ≅ (ℤ/pℤ)×  for all primes p.

The example above was the special case p = 7 and g = 3.

Exercise 1.3.5. Assuming the existence of primitive roots, establish the isomorphism ℤ/(p−1)ℤ ≅ (ℤ/pℤ)× as above.

It’s not hard to see that ≅ is an equivalence relation (why?). Moreover, because we really only care about the structure of groups, we’ll usually consider two groups to be the same when they are isomorphic. So phrases such as “find all groups” really mean “find all groups up to isomorphism”.

1.4  Orders of groups, and Lagrange’s theorem

Prototypical example for this section: (ℤ/pℤ)×.

As is typical in math, we use the word “order” for way too many things. In groups, there are two notions of order.

Definition 1.4.1. The order of a group G is the number of elements of G. We denote this by |G|. Note that the order may not be finite, as in ℤ. We say G is a finite group just to mean that |G| is finite.

Example 1.4.2 (Orders of groups)
For a prime p, |(ℤ/pℤ)×| = p − 1. In other words, the order of (ℤ/pℤ)× is p − 1. As another example, the order of the symmetric group Sₙ is |Sₙ| = n! and the order of the dihedral group D₂ₙ is 2n.

Definition 1.4.3. The order of an element g ∈ G is the smallest positive integer n such that gⁿ = 1_G, or ∞ if no such n exists. We denote this by ord g.

Example 1.4.4 (Examples of orders)
The order of −1 in ℚ× is 2, while the order of 1 in ℤ is infinite.

Question 1.4.5. Find the order of each of the six elements of ℤ/6ℤ, the cyclic group on six elements. (See Example 1.1.8 if you’ve forgotten what ℤ/6ℤ means.)

Example 1.4.6 (Primitive roots)
If you know olympiad number theory, this coincides with the definition of the order of a residue mod p. That’s why we use the term “order” there as well. In particular, a primitive root is precisely an element g ∈ (ℤ/pℤ)× such that ord g = p − 1.

You might also know that if xⁿ ≡ 1 (mod p), then the order of x (mod p) must divide n. The same is true in a general group for exactly the same reason.

Fact 1.4.7. If gⁿ = 1_G then ord g divides n.

Also, you can show that any element of a finite group has a finite order. The proof is just an olympiad-style pigeonhole argument. Consider the infinite sequence 1_G, g, g², …, and find two elements that are the same.

Fact 1.4.8. Let G be a finite group. For any g ∈ G, ord g is finite.

What’s the last property of (ℤ/pℤ)× that you know from olympiad math? We have Fermat’s little theorem: for any a ∈ (ℤ/pℤ)×, we have a^(p−1) ≡ 1 (mod p). This is no coincidence: exactly the same thing is true in a more general setting.

Theorem 1.4.9 (Lagrange’s theorem for orders)
Let G be any finite group. Then x^|G| = 1_G for any x ∈ G.

Keep this result in mind! We’ll prove it later, in full generality, in Chapter 3.
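Though the proof has to wait, nothing stops us from spot-checking the theorem numerically. Here is a Python sketch (mine, not the book’s) verifying x^|G| = 1_G for the 24-element group S₄, with permutations stored in one-line notation:

    from itertools import permutations

    identity = (0, 1, 2, 3)
    group = list(permutations(identity))   # all |G| = 24 elements of S_4

    def compose(f, g):
        return tuple(f[g[i]] for i in range(4))   # (f o g)(i) = f(g(i))

    for x in group:
        power = identity
        for _ in range(len(group)):
            power = compose(power, x)   # after the loop, power = x^24
        assert power == identity
    print("x^24 = id for every x in S_4")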

1.5  Subgroups

Prototypical example for this section: SLₙ(ℝ) is a subgroup of GLₙ(ℝ).

Earlier we saw that GLₙ(ℝ), the n × n matrices with nonzero determinant, formed a group under matrix multiplication. But we also saw that a subset of GLₙ(ℝ), namely SLₙ(ℝ), also formed a group with the same operation. For that reason we say that SLₙ(ℝ) is a subgroup of GLₙ(ℝ). And this definition generalizes in exactly the way you expect.

Definition 1.5.1. Let G = (G, ⋆) be a group. A subgroup of G is exactly what you would expect it to be: a group H = (H, ⋆) where H is a subset of G. It’s a proper subgroup if H ≠ G.

Remark 1.5.2 — To specify a group G, I needed to tell you both what the set G was and what the operation ⋆ was. But to specify a subgroup H of a given group G, I only need to tell you who its elements are: the operation of H is just inherited from the operation of G.

Example 1.5.3 (Examples of subgroups)

(a)
2ℤ is a subgroup of ℤ, which is isomorphic to ℤ itself!
(b)
Consider again Sₙ, the symmetric group on n elements. Let T be the set of permutations τ : {1, …, n} → {1, …, n} for which τ(n) = n. Then T is a subgroup of Sₙ; in fact, it is isomorphic to Sₙ₋₁.
(c)
Consider the group G × H (Example 1.1.15) and the elements {(g, 1_H) ∣ g ∈ G}. This is a subgroup of G × H (why?). In fact, it is isomorphic to G by the isomorphism (g, 1_H) ↦ g.

Example 1.5.4 (Stupid examples of subgroups)
For any group G, the trivial group {1G} and the entire group G are subgroups of G.

Next is an especially important example that we’ll talk about more in later chapters.

Example 1.5.5 (Subgroup generated by an element)
Let x be an element of a group G. Consider the set

⟨x⟩ = {…, x⁻², x⁻¹, 1, x, x², …}.

This is also a subgroup of G, called the subgroup generated by x.

Exercise 1.5.6. If ord x = 2015, what is the above subgroup equal to? What if ord x = ∞?

Finally, we present some non-examples of subgroups.

Example 1.5.7 (Non-examples of subgroups)
Consider the group ℤ = (ℤ, +).

(a)
The set {0, 1, 2, …} is not a subgroup of ℤ because it does not contain inverses.
(b)
The set {n³ ∣ n ∈ ℤ} = {…, −8, −1, 0, 1, 8, …} is not a subgroup because it is not closed under addition; the sum of two cubes is not in general a cube.
(c)
The empty set ∅ is not a subgroup of ℤ because it lacks an identity element.

1.6  Groups of small orders

Just for fun, here is a list of all groups of order less than or equal to ten (up to isomorphism, of course).

1.
The only group of order 1 is the trivial group.
2.
The only group of order 2 is ℤ/2ℤ.
3.
The only group of order 3 is ℤ/3ℤ.
4.
The only groups of order 4 are ℤ/4ℤ and the Klein four-group ℤ/2ℤ × ℤ/2ℤ.
5.
The only group of order 5 is ℤ/5ℤ.
6.
The groups of order six are ℤ/6ℤ and the symmetric group S₃, the latter being the smallest non-abelian group.

Some of you might wonder where ℤ/2ℤ × ℤ/3ℤ is. All I have to say is: Chinese remainder theorem!

You might wonder where D₆ is in this list. It’s actually isomorphic to S₃.

7.
The only group of order 7 is ℤ/7ℤ.
8.
The groups of order eight are more numerous: the abelian ones are ℤ/8ℤ, ℤ/4ℤ × ℤ/2ℤ, and ℤ/2ℤ × ℤ/2ℤ × ℤ/2ℤ, while the non-abelian ones are the dihedral group D₈ and the quaternion group Q₈.
9.
The groups of order nine are ℤ/9ℤ and ℤ/3ℤ × ℤ/3ℤ; both are abelian.
10.
The groups of order 10 are ℤ/10ℤ and the dihedral group D₁₀.

1.7  Unimportant long digression

A common question is: why these axioms? For example, why associative but not commutative? This answer will likely not make sense until later, but here are some comments that may help.

One general heuristic is: Whenever you define a new type of general object, there’s always a balancing act going on. On the one hand, you want to include enough constraints that your objects are “nice”. On the other hand, if you include too many constraints, then your definition applies to too few objects.

So, for example, we include “associative” because that makes our lives easier and most operations we run into are associative. In particular, associativity is required for the inverse of an element to necessarily be unique. However we don’t include “commutative”, because examples below show that there are lots of non-abelian groups we care about. (But we introduce another name “abelian” because we still want to keep track of it.)

Another comment: a good motivation for the inverse axioms is that you get a large amount of symmetry. The set of positive integers with addition is not a group, for example, because you can’t subtract 6 from 3: some elements are “larger” than others. By requiring an inverse element to exist, you get rid of this issue. (You also need identity for this; it’s hard to define inverses without it.)

Even more abstruse comment: Problem 1F shows that groups are actually shadows of the so-called symmetric groups (also called permutation groups). This makes rigorous the notion that “groups are very symmetric”.

1.8  A few harder problems to think about

Problem 1A. What is the joke in the following figure? (Source: [?].)

[Figure omitted from this version.]

Problem 1B. Prove Lagrange’s theorem for orders in the special case that G is a finite abelian group.

Problem 1C. Show that D₆ ≅ S₃ but D₂₄ ≇ S₄.

Problem 1D. Let p be a prime. Show that the only group of order p is ℤ/pℤ.

Problem 1E (A hint for Cayley’s theorem). Find a subgroup H of S₈ which is isomorphic to D₈, and write the isomorphism explicitly.

Problem 1F. Let G be a finite group. Show that there exists a positive integer n such that

(a)
(Cayley’s theorem) G is isomorphic to some subgroup of the symmetric group Sₙ.
(b)
(Representation theory) G is isomorphic to some subgroup of the general linear group GLₙ(ℝ). (This is the group of invertible n × n matrices.)

Problem 1G (IMO SL 2005 C5). There are n markers, each with one side white and the other side black. In the beginning, these n markers are aligned in a row so that their white sides are all up. In each step, if possible, we choose a marker whose white side is up (but not one of the outermost markers), remove it, and reverse the closest marker to the left of it and also reverse the closest marker to the right of it.

Prove that if n ≡ 1 (mod 3) it’s impossible to reach a state with only two markers remaining. (In fact the converse is true as well.)

Problem 1H. Let p be a prime and F₁ = F₂ = 1, Fₙ₊₂ = Fₙ₊₁ + Fₙ be the Fibonacci sequence. Show that F_{2p(p²−1)} is divisible by p.

2  Metric spaces

At the time of writing, I’m convinced that metric topology is the morally correct way to motivate point-set topology as well as to generalize normal calculus. So here is my best attempt.

The concept of a metric space is very “concrete”, and lends itself easily to visualization. Hence throughout this chapter you should draw lots of pictures as you learn about new objects, like convergent sequences, open sets, closed sets, and so on.

2.1  Definition and examples of metric spaces

Prototypical example for this section: ℝ2, with the Euclidean metric.

Definition 2.1.1. A metric space is a pair (M,d) consisting of a set of points M and a metric d : M × M → ℝ≥0. The distance function must obey:

(i)
The function d is symmetric: for any x, y ∈ M, we have d(x,y) = d(y,x).
(ii)
The function d is positive definite: d(x,y) ≥ 0, with equality if and only if x = y.
(iii)
The function d satisfies the triangle inequality: for any x, y, z ∈ M, we have d(x,z) + d(z,y) ≥ d(x,y).

Abuse of Notation 2.1.2. Just like with groups, we will abbreviate (M,d) as just M.

Example 2.1.3 (Metric spaces of ℝ)

(a)
The real line ℝ is a metric space under the metric d(x,y) = |x − y|. (A quick check that this satisfies the axioms appears after this example.)
(b)
The interval [0,1] is also a metric space with the same distance function.
(c)
In fact, any subset S of ℝ can be made into a metric space in this way.
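For example (a), symmetry and positive definiteness are clear from the properties of the absolute value, and the triangle inequality is just the usual one for real numbers: for any x, y, z ∈ ℝ,

\[ d(x,z) = |x - z| = |(x - y) + (y - z)| \le |x - y| + |y - z| = d(x,y) + d(y,z). \]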

Example 2.1.4 (Metric spaces of ℝ2)

(a)
We can make ℝ2 into a metric space by imposing the Euclidean distance function
\[ d\big((x_1,y_1),(x_2,y_2)\big) = \sqrt{(x_1-x_2)^2 + (y_1-y_2)^2}. \]
(b)
Just like with the first example, any subset of ℝ2 also becomes a metric space with the inherited distance function. The unit disk, unit circle, and the unit square [0,1]^2 are special cases.

Example 2.1.5 (Taxicab on ℝ2)
It is also possible to place the taxicab distance on ℝ2:

\[ d\big((x_1,y_1),(x_2,y_2)\big) = |x_1 - x_2| + |y_1 - y_2|. \]

For now, we will use the more natural Euclidean metric.
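To get a feel for the difference between the two metrics, here is one concrete computation, between the points (0,0) and (3,4):

\[ \sqrt{3^2 + 4^2} = 5 \quad \text{(Euclidean)}, \qquad |3| + |4| = 7 \quad \text{(taxicab)}. \]

The taxicab distance is never smaller than the Euclidean one, since traveling along the grid can only be longer than the straight-line path.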

Example 2.1.6 (Metric spaces of ℝn)
We can generalize the above examples easily. Let n be a positive integer.

(a)
We let ℝn be the metric space whose points are points in n-dimensional Euclidean space, and whose metric is the Euclidean metric
\[ d\big((a_1,\dots,a_n),(b_1,\dots,b_n)\big) = \sqrt{(a_1-b_1)^2 + \cdots + (a_n-b_n)^2}. \]
This is the n-dimensional Euclidean space.

(b)
The open unit ball B^n is the subset of ℝn consisting of those points (x_1, …, x_n) such that x_1^2 + ⋯ + x_n^2 < 1.
(c)
The unit sphere S^{n−1} is the subset of ℝn consisting of those points (x_1, …, x_n) such that x_1^2 + ⋯ + x_n^2 = 1, with the inherited metric. (The superscript n − 1 indicates that S^{n−1} is an (n − 1)-dimensional space, even though it lives in n-dimensional space.) For example, S^1 ⊆ ℝ2 is the unit circle, whose distance between two points is the length of the chord joining them. You can also think of it as the “boundary” of the unit ball B^n.

Example 2.1.7 (Function space)
We can let M be the space of continuous functions f : [0,1] → ℝ and define the metric by d(f,g) = \int_0^1 |f - g| \, dx. (It admittedly takes some work to check d(f,g) = 0 implies f = g, but we won’t worry about that yet.)

Here is a slightly more pathological example.

Example 2.1.8 (Discrete space)
Let S be any set of points (either finite or infinite). We can make S into a discrete space by declaring

\[ d(x,y) = \begin{cases} 1 & \text{if } x \neq y \\ 0 & \text{if } x = y. \end{cases} \]

If |S| = 4 you might think of this space as the vertices of a regular tetrahedron, living in ℝ3. But for larger S it’s not so easy to visualize…

Example 2.1.9 (Graphs are metric spaces)
Any connected simple graph G can be made into a metric space by defining the distance between two vertices to be the graph-theoretic distance between them. (The discrete metric is the special case when G is the complete graph on S.)

Question 2.1.10. Check the conditions of a metric space for the metrics on the discrete space and for the connected graph.

Abuse of Notation 2.1.11. From now on, we will refer to ℝn with the Euclidean metric by just ℝn. Moreover, if we wish to take the metric space for a subset S ⊆ ℝn with the inherited metric, we will just write S.

2.2  Convergence in metric spaces

Prototypical example for this section: The sequence 1∕n (for n = 1, 2, …) in ℝ.

Since we can talk about the distance between two points, we can talk about what it means for a sequence of points to converge. This is the same as the typical epsilon-delta definition, with absolute values replaced by the distance function.

Definition 2.2.1. Let (x_n)_{n≥1} be a sequence of points in a metric space M. We say that x_n converges to x if the following condition holds: for all 𝜀 > 0, there is an integer N (depending on 𝜀) such that d(x_n, x) < 𝜀 for each n ≥ N. This is written

xn →  x

or more verbosely as

\[ \lim_{n \to \infty} x_n = x. \]

We say that a sequence converges in M if it converges to a point in M.

You should check that this definition coincides with your intuitive notion of “converges”.

Abuse of Notation 2.2.2. If the parent space M is understood, we will allow ourselves to abbreviate “converges in M” to just “converges”. However, keep in mind that convergence is defined relative to the parent space; the “limit” of the sequence must actually be a point in M for the sequence to converge.

Example 2.2.3
Consider the sequence x1 = 1, x2 = 1.4, x3 = 1.41, x4 = 1.414, ….

(a)
If we view this as a sequence in ℝ, it converges to √2.
(b)
However, even though each x_i is in ℚ, this sequence does NOT converge when we view it as a sequence in ℚ!

Question 2.2.4. What are the convergent sequences in a discrete metric space?

2.3  Continuous maps

In calculus you were also told (or have at least heard) of what it means for a function to be continuous. Probably something like

A function f : ℝ → ℝ is continuous at a point p if for every 𝜀 > 0 there exists a δ > 0 such that |x − p| < δ ⟹ |f(x) − f(p)| < 𝜀.

Question 2.3.1. Can you guess what the corresponding definition for metric spaces is?

All we have to do is replace the absolute values with the more general distance functions: this gives us a definition of continuity for any function M → N.

Definition 2.3.2. Let M = (M, d_M) and N = (N, d_N) be metric spaces. A function f : M → N is continuous at a point p ∈ M if for every 𝜀 > 0 there exists a δ > 0 such that

d_M(x,p) < δ ⟹ d_N(f(x), f(p)) < 𝜀.

Moreover, the entire function f is continuous if it is continuous at every point p ∈ M.

Notice that, just like in our definition of an isomorphism of a group, we use the metric of M for one condition and the metric of N for the other condition.

This generalization is nice because it tells us immediately how we could carry over continuity arguments in ℝ to more general metric spaces. Nonetheless, this definition is kind of cumbersome to work with, because it makes extensive use of the real numbers (epsilons and deltas). Here is an equivalent condition.

Theorem 2.3.3 (Sequential continuity)
A function f : M → N of metric spaces is continuous at a point p ∈ M if and only if the following property holds: if x_1, x_2, … is a sequence in M converging to p, then the sequence f(x_1), f(x_2), … in N converges to f(p).

Proof. One direction is not too hard:

Exercise 2.3.4. Show that 𝜀-δ continuity implies sequential continuity at each point.

Conversely, we will prove that if f is not 𝜀-δ continuous at p then it does not preserve convergence.

If f is not continuous at p, then there is a “bad” 𝜀 > 0, which we now consider fixed. So for each choice of δ = 1∕n, there should be some point xn which is within δ of p, but which is mapped more than 𝜀 away from f(p). But then the sequence xn converges to p, and f(xn) is always at least 𝜀 away from f(p), contradiction. □

Example application showcasing the niceness of sequential continuity:

Proposition 2.3.5 (Composition of continuous functions is continuous)
Let f : M → N and g : N → L be continuous maps of metric spaces. Then their composition g ∘ f is continuous.

Proof. Dead simple with sequences: Let p ∈ M be arbitrary and let x_n → p in M. Then f(x_n) → f(p) in N and g(f(x_n)) → g(f(p)) in L, QED. □

Question 2.3.6. Let M be any metric space and D a discrete space. When is a map f : D → M continuous?

2.4  Homeomorphisms

Prototypical example for this section: The unit circle S1 is homeomorphic to the boundary of the unit square.

When do we consider two groups to be the same? Answer: if there’s a structure-preserving map between them which is also a bijection. For metric spaces, we do exactly the same thing, but replace “structure-preserving” with “continuous”.

Definition 2.4.1. Let M and N be metric spaces. A function f : M → N is a homeomorphism if it is a bijection, and both f : M → N and its inverse f^{−1} : N → M are continuous. We say M and N are homeomorphic.

Needless to say, homeomorphism is an equivalence relation.

You might be surprised that we require f^{−1} to also be continuous. Here’s the reason: you can show that if ϕ is an isomorphism of groups, then ϕ^{−1} also preserves the group operation, hence ϕ^{−1} is itself an isomorphism. The same is not true for continuous bijections, which is why we need the new condition.
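A one-line version of that group-theoretic fact, writing ψ = ϕ^{−1} and taking any elements a, b of the codomain:

\[ \psi(ab) = \psi\big( \phi(\psi(a)) \cdot \phi(\psi(b)) \big) = \psi\big( \phi\big( \psi(a)\psi(b) \big) \big) = \psi(a)\psi(b). \]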

Example 2.4.2 (Homeomorphism ≠ continuous bijection)

(a)
There is a continuous bijection from [0,1) to the circle, but it has no continuous inverse.
(b)
Let M be a discrete space with size |ℝ|. Then there is a continuous bijection M → ℝ which certainly has no continuous inverse.

Note that this is the topologist’s definition of “same” – homeomorphisms are “continuous deformations”. Here are some examples.

Example 2.4.3 (Examples of homeomorphisms)

(a)
Any space M is homeomorphic to itself through the identity map.
(b)
The old saying: a doughnut (torus) is homeomorphic to a coffee cup. (Look this up if you haven’t heard of it.)
(c)
The unit circle S^1 is homeomorphic to the boundary of the unit square. Here’s one bijection between them, after an appropriate scaling (centering the square at the origin): project radially, sending each boundary point x of the square to the point x∕‖x‖ on the circle.

Example 2.4.4 (Metrics on the unit circle)
It may have seemed strange that our metric function on S^1 was the one inherited from ℝ2, meaning the distance between two points on the circle was defined to be the length of the chord. Wouldn’t it have made more sense to use the circumference of the smaller arc joining the two points?

In fact, it doesn’t matter: if we consider S^1 with the “chord” metric and the “arc” metric, we get two homeomorphic spaces, since the identity map between them is continuous in both directions.

The same goes for S^{n−1} for general n.

Example 2.4.5 (Homeomorphisms really don’t preserve size)
Surprisingly, the open interval (−1,1) is homeomorphic to the real line ℝ! One bijection is given by

x ↦ tan(xπ∕2)

with the inverse being given by t ↦ (2∕π) arctan(t).

This might come as a surprise, since (−1,1) doesn’t look that much like ℝ; the former is “bounded” while the latter is “unbounded”.
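As a sanity check that these two maps really are inverse to each other: for any t ∈ ℝ, the point x = (2∕π) arctan(t) lies in (−1, 1), and

\[ \tan\left( \frac{2}{\pi} \arctan(t) \cdot \frac{\pi}{2} \right) = \tan(\arctan(t)) = t. \]

Both maps are continuous (tan on (−π∕2, π∕2), and arctan on all of ℝ), so this really is a homeomorphism.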

2.5  Extended example/definition: product metric

Prototypical example for this section: ℝ × ℝ is ℝ2.

Here is an extended example which will occur later on. Let M = (M, d_M) and N = (N, d_N) be metric spaces (say, M = N = ℝ). Our goal is to put a metric on M × N.

Let p_i = (x_i, y_i) ∈ M × N for i = 1, 2. Consider the following metrics on the set of points M × N:

\begin{align*}
d_{\max}(p_1,p_2) &:= \max\left\{ d_M(x_1,x_2),\ d_N(y_1,y_2) \right\} \\
d_{\text{Euclid}}(p_1,p_2) &:= \sqrt{ d_M(x_1,x_2)^2 + d_N(y_1,y_2)^2 } \\
d_{\text{taxicab}}(p_1,p_2) &:= d_M(x_1,x_2) + d_N(y_1,y_2).
\end{align*}

All of these are good candidates. We are about to see it doesn’t matter which one we use:

Exercise 2.5.1. Verify that

d_max(p_1,p_2) ≤ d_Euclid(p_1,p_2) ≤ d_taxicab(p_1,p_2) ≤ 2 d_max(p_1,p_2).

Use this to show that the metric spaces we obtain by imposing any of the three metrics are homeomorphic, with the homeomorphism being just the identity map.

Definition 2.5.2. Hence we will usually simply refer to the metric on M × N, called the product metric. It will not be important which of the three metrics we select.

Example 2.5.3 (ℝ2)
If M = N = ℝ, we get ℝ2, the Euclidean plane. The metric d_Euclid is the one we started with, but using either of the other two metrics works fine as well.

The product metric plays well with convergence of sequences.

Proposition 2.5.4 (Convergence in the product metric is by component)
We have (x_n, y_n) → (x, y) if and only if x_n → x and y_n → y.

Proof. We have d_max((x,y),(x_n,y_n)) = max{d_M(x,x_n), d_N(y,y_n)}, and the latter approaches zero as n → ∞ if and only if d_M(x,x_n) → 0 and d_N(y,y_n) → 0. □

Let’s see an application of this:

Proposition 2.5.5 (Addition and multiplication are continuous)
The addition and multiplication maps are continuous maps ℝ × ℝ → ℝ.

Proof. For multiplication: for any n we have

\begin{align*}
x_n y_n &= (x + (x_n - x))(y + (y_n - y)) \\
&= xy + y(x_n - x) + x(y_n - y) + (x_n - x)(y_n - y) \\
\implies |x_n y_n - xy| &\le |y|\,|x_n - x| + |x|\,|y_n - y| + |x_n - x|\,|y_n - y|.
\end{align*}

As n → ∞, all three terms on the right-hand side tend to zero. The proof that + : ℝ × ℝ → ℝ is continuous is similar (and easier): one notes for any n that

|(x_n + y_n) − (x + y)| ≤ |x_n − x| + |y_n − y|

and both terms on the right-hand side tend to zero as n →∞. □

?? covers the other two operations, subtraction and division. The upshot of this is that, since compositions are also continuous, most of your naturally arising real-valued functions will automatically be continuous as well. For example, the function x ↦ 2x∕(x^3 + 1) will be a continuous function from ℝ ∖ {−1} to ℝ, since it can be obtained by composing +, ×, ÷.

2.6  Open sets

Prototypical example for this section: The open disk x^2 + y^2 < r^2 in ℝ2.

Continuity is really about what happens “locally”: how a function behaves “close to a certain point p”. One way to capture this notion of “closeness” is to use metrics as we’ve done above. In this way we can define an r-neighborhood of a point.

Definition 2.6.1. Let M be a metric space. For each real number r > 0 and point p M, we define

Mr (p) := {x ∈ M : d(x,p) < r} .

The set Mr(p) is called an r-neighborhood of p.

We can rephrase convergence more succinctly in terms of r-neighborhoods. Specifically, a sequence (xn) converges to x if for every r-neighborhood of x, all terms of xn eventually stay within that r-neighborhood.
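Written out formally, the rephrasing says:

\[ x_n \to x \iff \text{for each } r > 0, \text{ there is an } N \text{ such that } x_n \in M_r(x) \text{ for all } n \ge N. \]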

Let’s try to do the same with functions.

Question 2.6.2. In terms of r-neighborhoods, what does it mean for a function f : M → N to be continuous at a point p ∈ M?

Essentially, we require that the pre-image of every 𝜀-neighborhood has the property that some δ-neighborhood exists inside it. This motivates:

Definition 2.6.3. A set U ⊆ M is open in M if for each p ∈ U, some r-neighborhood of p is contained inside U. In other words, there exists r > 0 such that M_r(p) ⊆ U.

Abuse of Notation 2.6.4. Note that a set being open is defined relative to the parent space M. However, if M is understood we can abbreviate “open in M” to just “open”.

Figure 2.1: The set of points x^2 + y^2 < 1 in ℝ2 is open in ℝ2.

Example 2.6.5 (Examples of open sets)

(a)
Any r-neighborhood of a point is open.
(b)
Open intervals of ℝ are open in ℝ, hence the name! This is the prototypical example to keep in mind. (A short verification appears after this example.)
(c)
The open unit ball B^n is open in ℝn for the same reason.
(d)
In particular, the open interval (0,1) is open in ℝ. However, if we embed it in ℝ2, it is no longer open!
(e)
The empty set and the whole set of points M are open in M.
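Here is the promised verification for (b): given an open interval (a,b) ⊆ ℝ and a point p ∈ (a,b), take r = min{p − a, b − p} > 0. Then

\[ M_r(p) = (p - r,\ p + r) \subseteq (a, b), \]

so every point of (a,b) has an r-neighborhood inside it, which is exactly the definition of open.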

Example 2.6.6 (Non-examples of open sets)

(a)
The closed interval [0,1] is not open in ℝ. There is no 𝜀-neighborhood of the point 0 which is contained in [0,1].
(b)
The unit circle S^1 is not open in ℝ2.

Question 2.6.7. What are the open sets of the discrete space?

Here are two quite important properties of open sets.

Proposition 2.6.8 (Intersections and unions of open sets)

(a)
The intersection of finitely many open sets is open.
(b)
The union of open sets is open, even if there are infinitely many.

Question 2.6.9. Convince yourself this is true.

Exercise 2.6.10. Exhibit an infinite collection of open sets in ℝ whose intersection is the set {0}. This implies that infinite intersections of open sets are not necessarily open.

The whole upshot of this is:

Theorem 2.6.11 (Open set condition)
A function f : M N of metric spaces is continuous if and only if the pre-image of every open set in N is open in M.

Proof. I’ll just do one direction…

Exercise 2.6.12. Show that δ-𝜀 continuity follows from the open set condition.

Now assume f is continuous. First, suppose V is an open subset of the metric space N; let U = f^{pre}(V). Pick x ∈ U, so y = f(x) ∈ V; we want an open neighborhood of x inside U.

As V is open, there is some small 𝜀-neighborhood around y which is contained inside V . By continuity of f, we can find a δ such that the δ-neighborhood of x gets mapped by f into the 𝜀-neighborhood in N, which in particular lives inside V . Thus the δ-neighborhood lives in U, as desired. □

2.7  Closed sets

Prototypical example for this section: The closed unit disk x^2 + y^2 ≤ r^2 in ℝ2.

It would be criminal for me to talk about open sets without talking about closed sets. The name “closed” comes from the definition in a metric space.

Definition 2.7.1. Let M be a metric space. A subset S ⊆ M is closed in M if the following property holds: let x_1, x_2, … be a sequence of points in S and suppose that x_n converges to x in M. Then x ∈ S as well.

Abuse of Notation 2.7.2. Same caveat: we abbreviate “closed in M” to just “closed” if the parent space M is understood.

Here’s another way to phrase it. The limit points of a subset S ⊆ M are defined by

\[ \lim S := \left\{ p \in M : \exists (x_n) \text{ in } S \text{ such that } x_n \to p \right\}. \]

Thus S is closed if and only if S = limS.

Exercise 2.7.3. Prove that limS is closed even if S isn’t closed. (Draw a picture.)

For this reason, lim S is also called the closure of S in M, and denoted \overline{S}. It is simply the smallest closed set which contains S.
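For instance, in M = ℝ, using standard facts about real sequences:

\[ \lim\,(0,1) = [0,1], \qquad \lim\,\mathbb{Q} = \mathbb{R}. \]

The first holds because sequences in (0,1) can converge to 0 or 1 but to nothing outside [0,1]; the second because every real number is the limit of a sequence of rationals.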

Example 2.7.4 (Examples of closed sets)

(a)
The empty set ∅ is closed in M for vacuous reasons: there are no sequences of points with elements in ∅.
(b)
The entire space M is closed in M for tautological reasons. (Verify this!)
(c)
The closed interval [0,1] in ℝ is closed in ℝ, hence the name. Like with open sets, this is the prototypical example of a closed set to keep in mind!
(d)
In fact, the closed interval [0,1] is even closed in ℝ2.

Example 2.7.5 (Non-examples of closed sets)
Let S = (0,1) denote the open interval. Then S is not closed in ℝ because the sequence of points

\[ \frac{1}{2},\ \frac{1}{4},\ \frac{1}{8},\ \dots \]

converges to 0, but 0 ∉ (0,1).

I should now warn you about a confusing part of this terminology. Firstly, “most” sets are neither open nor closed.

Example 2.7.6 (A set neither open nor closed)
The half-open interval [0,1) is neither open nor closed in ℝ.

Secondly, it’s also possible for a set to be both open and closed; this will be discussed in ?? .

The reason for the opposing terms is the following theorem:

Theorem 2.7.7 (Closed sets are complements of open sets)
Let M be a metric space, and S ⊆ M any subset. Then the following are equivalent:

(a)
S is closed in M.
(b)
The complement M ∖ S is open in M.

Exercise 2.7.8 (Great). Prove this theorem! You’ll want to draw a picture to make it clear what’s happening: for example, you might take M = 2 and S to be the closed unit disk.

2.8  A few harder problems to think about

Problem 2A. Let M = (M,d) be a metric space. Show that

d : M × M  → ℝ

is itself a continuous function (where M × M is equipped with the product metric).

Problem 2B. Are and homeomorphic subspaces of ?

Problem 2C (Continuity of arithmetic continued). Show that subtraction is a continuous map − : ℝ × ℝ → ℝ, and division is a continuous map ÷ : ℝ × ℝ_{>0} → ℝ.

Problem 2D. Exhibit a function f : ℝ → ℝ such that f is continuous at x if and only if x = 0.

Problem 2E. Prove that a function f : ℝ → ℝ which is strictly increasing must be continuous at some point.

Part II
Basic Abstract Algebra

3  Homomorphisms and quotient groups

3.1  Generators and group presentations

Prototypical example for this section: D_{2n} = ⟨r, s ∣ r^n = s^2 = 1, rs = sr^{−1}⟩

Let G be a group. Recall that for some element x ∈ G, we could consider the subgroup

\[ \left\{ \dots, x^{-2}, x^{-1}, 1, x, x^2, \dots \right\} \]

of G. Here’s a more pictorial version of what we did: put x in a box, seal it tightly, and shake vigorously. Using just the element x, we get a pretty explosion that produces the subgroup above.

What happens if we put two elements x, y in the box? Among the elements that get produced are things like

\[ xyxyx, \quad x^2 y^9 x^{-5} y^3, \quad y^{-2015}, \quad \dots \]

Essentially, I can create any finite product of x, y, x^{−1}, y^{−1}. This leads us to define:

Definition 3.1.1. Let S be a subset of G. The subgroup generated by S, denoted ⟨S ⟩, is the set of elements which can be written as a finite product of elements in S (and their inverses). If ⟨S⟩ = G then we say S is a set of generators for G, as the elements of S together create all of G.

Exercise 3.1.2. Why is the condition “and their inverses” not necessary if G is a finite group? (As usual, assume Lagrange’s theorem.)

Example 3.1.3 (ℤ is the infinite cyclic group)
Consider 1 as an element of ℤ = (ℤ, +). We see ⟨1⟩ = ℤ, meaning {1} generates ℤ. It’s important that −1, the inverse of 1, is also allowed: we need it to write all integers as sums of 1’s and −1’s.

This gives us an idea for a way to try and express groups compactly. Why not just write down a list of generators for the groups? For example, we could write

ℤ ∼= ⟨a⟩

meaning that ℤ is just the group generated by one element.

There’s one issue: the generators usually satisfy certain properties. For example, consider ℤ∕100ℤ. It’s also generated by a single element x, but this x has the additional property that x^{100} = 1. This motivates us to write

\[ \mathbb{Z}/100\mathbb{Z} = \left\langle x \mid x^{100} = 1 \right\rangle. \]

I’m sure you can see where this is going. All we have to do is specify a set of generators and relations between the generators, and say that two elements are equal if and only if you can get from one to the other using relations. Such an expression is appropriately called a group presentation.

Example 3.1.4 (Dihedral group)
The dihedral group of order 2n has a presentation

\[ D_{2n} = \left\langle r, s \mid r^n = s^2 = 1,\ rs = sr^{-1} \right\rangle. \]

Thus each element of D_{2n} can be written uniquely in the form r^α or sr^α, where α = 0, 1, …, n − 1.
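For instance, here is one such reduction carried out, using the relation rs = sr^{−1} in the equivalent form srs = r^{−1} (and inserting s^2 = 1 in the middle):

\[ r s r^2 s = r \left( s r^2 s \right) = r (srs)(srs) = r \cdot r^{-1} \cdot r^{-1} = r^{-1} = r^{n-1}. \]

So the word rsr^2s, which doesn’t look like r^α or sr^α at first, reduces to r^{n−1}.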

Example 3.1.5 (Klein four group)
The Klein four group, isomorphic to ℤ∕2ℤ × ℤ∕2ℤ, is given by the presentation

\[ \left\langle a, b \mid a^2 = b^2 = 1,\ ab = ba \right\rangle. \]

Example 3.1.6 (Free group)
The free group on n elements is the group whose presentation has n generators and no relations at all. It is denoted Fn, so

Fn = ⟨x1,x2,...,xn⟩.

In other words, F_2 = ⟨a,b⟩ is the set of strings formed by appending finitely many copies of a, b, a^{−1}, b^{−1} together.

Question 3.1.7. Notice that F_1 ≅ ℤ.

Abuse of Notation 3.1.8. One might unfortunately notice that “subgroup generated by a and b” has exactly the same notation as the free group ⟨a,b⟩. We’ll try to be clear based on context which one we mean.

Presentations are nice because they provide a compact way to write down groups. They do have some shortcomings, though.

Example 3.1.9 (Presentations can look very different)
The same group can have very different presentations. For instance consider

\[ D_{2n} = \left\langle x, y \mid x^2 = y^2 = 1,\ (xy)^n = 1 \right\rangle. \]

(To see why this is equivalent, set x = s, y = rs.)

3.2  Homomorphisms

Prototypical example for this section: The “mod out by 100” map, ℤ → ℤ∕100ℤ.

How can groups talk to each other?

Two groups are “the same” if we can write an isomorphism between them. And as we saw, two metric spaces are “the same” if we can write a homeomorphism between them. But what’s the group analogy of a continuous map? We simply drop the “bijection” condition.

Definition 3.2.1. Let G = (G, ⋆) and H = (H, ∗) be groups. A group homomorphism is a map ϕ : G → H such that for any g_1, g_2 ∈ G we have

ϕ(g1 ⋆ g2) = ϕ (g1) ∗ϕ(g2).

(Not to be confused with “homeomorphism” from last chapter: note the spelling.)

Example 3.2.2 (Examples of homomorphisms)
Let G and H be groups.

(a)
Any isomorphism G → H is a homomorphism. In particular, the identity map G → G is a homomorphism.
(b)
The trivial homomorphism G → H sends everything to 1_H.
(c)
There is a homomorphism from ℤ to ℤ∕100ℤ by sending each integer to its residue modulo 100.
(d)
There is a homomorphism from ℤ to itself by x ↦ 10x which is injective but not surjective.
(e)
There is a homomorphism from S_n to S_{n+1} by “embedding”: every permutation on {1, …, n} can be thought of as a permutation on {1, …, n + 1} if we simply let n + 1 be a fixed point.
(f)
A homomorphism ϕ : D_{12} → D_6 is given by s_{12} ↦ s_6 and r_{12} ↦ r_6.
(g)
Specifying a homomorphism ℤ → G is the same as specifying just the image of the element 1 ∈ ℤ. Why?

The last two examples illustrate something: suppose we have a presentation of G. To specify a homomorphism G H, we only have to specify where each generator of G goes, in such a way that the relations are all satisfied.
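For instance, using the presentation D_{2n} = ⟨r, s ∣ r^n = s^2 = 1, rs = sr^{−1}⟩, one can specify a homomorphism ϕ : D_{2n} → ℤ∕2ℤ (written additively) by declaring r ↦ 0 and s ↦ 1; this is legal because all three relations are preserved:

\[ n \cdot 0 = 0, \qquad 1 + 1 = 0, \qquad 0 + 1 = 1 + (-0). \]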

Important remark: the right way to think about an isomorphism is as a “bijective homomorphism”. To be explicit,

Exercise 3.2.3. Show that G ≅ H if and only if there exist homomorphisms ϕ : G → H and ψ : H → G such that ϕ ∘ ψ = id_H and ψ ∘ ϕ = id_G.

So the definitions of homeomorphism of metric spaces and isomorphism of groups are not too different.

Some obvious properties of homomorphisms follow.

Fact 3.2.4. Let ϕ : G → H be a homomorphism. Then ϕ(1_G) = 1_H and ϕ(g^{−1}) = ϕ(g)^{−1}.

Proof. Boring, and I’m sure you could do it yourself if you wanted to. □

Now let me define a very important property of a homomorphism.

Definition 3.2.5. The kernel of a homomorphism ϕ : G → H is defined by

kerϕ := {g ∈ G : ϕ(g) = 1H} .

It is a subgroup of G (in particular, 1_G ∈ ker ϕ for obvious reasons).

Question 3.2.6. Verify that kerϕ is in fact a subgroup of G.

We also have the following important fact, which we also encourage the reader to verify.

Proposition 3.2.7 (Kernel determines injectivity)
The map ϕ is injective if and only if kerϕ = {1G}.

To make this concrete, let’s compute the kernel of each of our examples.

Example 3.2.8 (Examples of kernels)

(a)
The kernel of any isomorphism G → H is trivial, since an isomorphism is injective. In particular, the kernel of the identity map G → G is {1_G}.
(b)
The kernel of the trivial homomorphism G → H (by g ↦ 1_H) is all of G.
(c)
The kernel of the homomorphism ℤ → ℤ∕100ℤ by n ↦ n mod 100 is precisely
100ℤ = {…, −200, −100, 0, 100, 200, …}.
(d)
The kernel of the map ℤ → ℤ by x ↦ 10x is trivial: {0}.
(e)
There is a homomorphism from S_n to S_{n+1} by “embedding”, but it also has trivial kernel because it is injective.
(f)
A homomorphism ϕ : D_{12} → D_6 is given by s_{12} ↦ s_6 and r_{12} ↦ r_6. You can check that
ker ϕ = {1, r_{12}^3} ≅ ℤ∕2ℤ.
(g)
Exercise below.

Exercise 3.2.9. Fix any g ∈ G. Suppose we have a homomorphism ℤ → G by n ↦ g^n. What is the kernel?

Question 3.2.10. Show that for any homomorphism ϕ : G → H, the image ϕ^{img}(G) is a subgroup of H. Hence, we’ll be especially interested in the case where ϕ is surjective.

3.3  Cosets and modding out

Prototypical example for this section: Modding out by n: ℤ∕(nℤ) ≅ ℤ∕nℤ.

The next few sections are a bit dense. If this exposition doesn’t work for you, try [?].

Let G and Q be groups, and suppose there exists a surjective homomorphism

ϕ : G ↠ Q.

If ϕ is injective then ϕ : G → Q is a bijection, and hence an isomorphism. But suppose we’re not so lucky and ker ϕ is bigger than just {1_G}. What is the correct interpretation of a more general homomorphism?

Let’s look at the special case where ϕ : ℤ → ℤ∕100ℤ is “modding out by 100”. We already saw that the kernel of this map is

kerϕ = 100ℤ = {...,− 200,− 100, 0,100, 200,...} .

Recall now that ker ϕ is a subgroup of G. What this means is that ϕ is indifferent to the subgroup 100ℤ of ℤ:

ϕ(15) = ϕ(2000 + 15) = ϕ(−300 + 15) = ϕ(700 + 15) = ⋯.

So ℤ∕100ℤ is what we get when we “mod out by 100”. Cool.

In other words, let G be a group and ϕ : G ↠ Q be a surjective homomorphism with kernel N ⊆ G.

We claim that Q should be thought of as the quotient of G by N.

To formalize this, we will define a so-called quotient group G∕N in terms of G and N only (without referencing Q) which will be naturally isomorphic to Q.

For motivation, let’s give a concrete description of Q using just ϕ and G. Continuing our previous example, let N = 100ℤ be our subgroup of G. Consider the sets

N = {...,− 200,− 100,0,100,200,...}
1 + N = {...,− 199,− 99,1,101,201,...}
2 + N = {...,− 198,− 98,2,102,202,...}
⋮
99 + N = {...,− 101,− 1,99,199,299,...}.

The elements of each set all have the same image when we apply ϕ, and moreover any two elements in different sets have different images. Then the main idea is to notice that

We can think of Q as the group whose elements are the sets above.

Thus, given ϕ we define an equivalence relation ∼_N on G by saying x ∼_N y if ϕ(x) = ϕ(y). This ∼_N divides G into several equivalence classes, which are in obvious bijection with Q, as above. Now we claim that we can write these equivalence classes very explicitly.

Exercise 3.3.1. Show that x ∼_N y if and only if x = yn for some n ∈ N (in the mod 100 example, this means they “differ by some multiple of 100”). Thus for any g ∈ G, the equivalence class under ∼_N which contains g is given explicitly by

gN := {gn | n ∈ N} .

Here’s the word that describes the types of sets we’re running into now.

Definition 3.3.2. Let H be any subgroup of G (not necessarily the kernel of some homomorphism). A set of the form gH is called a left coset of H.

Remark 3.3.3 — Although the notation might not suggest it, keep in mind that g_1N is often equal to g_2N even if g_1 ≠ g_2. In the “mod 100” example, 3 + N = 103 + N. In other words, these cosets are sets.

This means that if I write “let gH be a coset” without telling you what g is, you can’t figure out which g I chose from just the coset itself. If you don’t believe me, here’s an example of what I mean:

x + 100ℤ = {…, −97, 3, 103, 203, …} ⟹ x = ?

There’s no reason to think I picked x = 3. (I actually picked x = 13597.)

Remark 3.3.4 — Given cosets g_1H and g_2H, you can check that the map x ↦ g_2g_1^{−1}x is a bijection between them. So actually, all cosets have the same cardinality.

So, long story short,

Elements of the group Q are naturally identified with left cosets of N.

In practice, people often still prefer to picture elements of Q as single points (for example it’s easier to think of ℤ∕2ℤ as {0,1} rather than {{…, −2, 0, 2, …}, {…, −1, 1, 3, …}}). If you like this picture, then you might then draw G as a bunch of equally tall fibers (the cosets), which are then “collapsed” onto Q.

Now that we’ve done this, we can give an intrinsic definition for the quotient group we alluded to earlier.

Definition 3.3.5. A subgroup N of G is called normal if it is the kernel of some homomorphism. We write this as N ⊴ G.

Definition 3.3.6. Let N ⊴ G. Then the quotient group, denoted G∕N (and read “G mod N”), is the group defined as follows.

(i)
The elements of G∕N are the left cosets of N.
(ii)
The group operation: given two cosets C_1 and C_2, pick any g_1 ∈ C_1 and g_2 ∈ C_2; then C_1 ⋅ C_2 is defined as the coset which contains g_1g_2. (Because N is the kernel of some homomorphism ϕ, this does not depend on which representatives g_1 and g_2 we picked.)

And now you know why the integers modulo n are often written ℤ∕nℤ!

Question 3.3.7. Take a moment to digest the above definition.

By the way we’ve built it, the resulting group G∕N is isomorphic to Q. In a sense we think of G∕N as “G modulo the condition that n = 1 for all n N”.

3.4  (Optional) Proof of Lagrange’s theorem

As an aside, with the language of cosets we can now show Lagrange’s theorem in the general case.

Theorem 3.4.1 (Lagrange’s theorem)
Let G be a finite group, and let H be any subgroup. Then |H | divides |G|.

The proof is very simple: note that the cosets of H all have the same size and form a partition of G (even when H is not necessarily normal). Hence if n is the number of cosets, then n |H | = |G |.
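For concreteness, take G = ℤ∕6ℤ and H = {0, 3} (a subgroup, since 3 + 3 = 0). Writing the group additively, the cosets are

\[ 0 + H = \{0, 3\}, \qquad 1 + H = \{1, 4\}, \qquad 2 + H = \{2, 5\}, \]

which partition G into n = 3 cosets of size |H| = 2, and indeed 3 ⋅ 2 = 6 = |G|.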

Question 3.4.2. Conclude that x^{|G|} = 1 by taking H = ⟨x⟩ ⊆ G.

Remark 3.4.3 — It should be mentioned at this point that in general, if G is a finite group and N is normal, then |G∕N| = |G|∕|N|.

3.5  Eliminating the homomorphism

Prototypical example for this section: Again ℤ∕(nℤ) ≅ ℤ∕nℤ.

Let’s look at the last definition of G∕N we provided. The short version is: the elements of G∕N are the cosets of N, and to multiply two cosets we pick a representative from each, multiply the representatives, and take the coset containing the result.

Question: where do we actually use the fact that N is normal? We don’t talk about ϕ or Q anywhere in this definition.

The answer is in ?? . The group operation takes in two cosets, so it doesn’t know what g1 and g2 are. But behind the scenes, the normal condition guarantees that the group operation can pick any g1 and g2 it wants and still end up with the same coset. If we didn’t have this property, then it would be hard to define the product of two cosets C1 and C2 because it might make a difference which g1 C1 and g2 C2 we picked. The fact that N came from a homomorphism meant we could pick any representatives g1 and g2 of the cosets we wanted, because they all had the same ϕ-value.

We want some conditions which force this to be true without referencing ϕ at all. Suppose ϕ : G → K is a homomorphism of groups with H = ker ϕ. Aside from the fact H is a group, we can get an “obvious” property:

Question 3.5.1. Show that if h ∈ H, g ∈ G, then ghg^{−1} ∈ H. (Check ϕ(ghg^{−1}) = 1_K.)

Example 3.5.2 (Example of a non-normal subgroup)
Let D_{12} = ⟨r, s ∣ r^6 = s^2 = 1, rs = sr^{−1}⟩. Consider the subgroup of order two H = {1, s} and notice that

\[ rsr^{-1} = r(sr^{-1}) = r(rs) = r^2 s \notin H. \]

Hence H is not normal, and cannot be the kernel of any homomorphism.

Well, duh – so what? Amazingly, it turns out that this is the sufficient condition we want. Specifically, it makes the nice “coset multiplication” we wanted work out.

Remark 3.5.3 (For math contest enthusiasts) — This coincidence is really a lot like functional equations at the IMO. We all know that normal subgroups H satisfy ghg^{−1} ∈ H; the surprise is that from this seemingly weaker condition, we can deduce H is normal.

Thus we have a new criterion for “normal” subgroups which does not make any external references to ϕ.

Theorem 3.5.4 (Algebraic condition for normal subgroups)
Let H be a subgroup of G. Then the following are equivalent:

(a)
H is the kernel of some homomorphism (i.e. H is normal).
(b)
For every g ∈ G and h ∈ H, we have ghg^{−1} ∈ H.

Proof. We already showed one direction.

For the other direction, we need to build a homomorphism with kernel H. So we simply define the group G∕H as the cosets. To put a group operation, we need to verify:

Claim 3.5.5. If g_1′ ∼_H g_1 and g_2′ ∼_H g_2 then g_1′g_2′ ∼_H g_1g_2.

Proof. Boring algebraic manipulation (again functional equation style). Let g_1′ = g_1h_1 and g_2′ = g_2h_2, so we want to show that g_1h_1g_2h_2 ∼_H g_1g_2. Since H has the property, g_2^{−1}h_1g_2 is some element of H, say h_3. Thus h_1g_2 = g_2h_3, and the left-hand side becomes g_1g_2(h_3h_2), which is fine since h_3h_2 ∈ H. □

With that settled we can just define the product of two cosets (of normal subgroups) by

\[ (g_1H) \cdot (g_2H) = (g_1g_2)H. \]

Thus the claim above shows that this multiplication is well-defined (this verification is the “content” of the theorem). So G∕H is indeed a group! Moreover there is an obvious “projection” homomorphism G → G∕H (with kernel H), by g ↦ gH. □

Example 3.5.6 (Modding out in the product group)
Consider again the product group G × H. Earlier we identified a subgroup

\[ G' = \{ (g, 1_H) \mid g \in G \} \cong G. \]

You can easily see that G′ ⊴ G × H. (Easy calculation.)

Moreover, just as the notation would imply, you can check that

\[ (G \times H) / (G') \cong H. \]

Indeed, we have (g, h) ∼_{G′} (1_G, h) for all g ∈ G and h ∈ H.

Example 3.5.7 (Another explicit computation)
Let ϕ : D_8 → ℤ∕4ℤ be defined by

\[ r \mapsto \overline{2}, \qquad s \mapsto \overline{2}. \]

The kernel of this map is N = {1, r^2, sr, sr^3}.

We can do a quick computation of all the elements of D8 to get

\[ \phi(1) = \phi(r^2) = \phi(sr) = \phi(sr^3) = \overline{0} \quad\text{and}\quad \phi(r) = \phi(r^3) = \phi(s) = \phi(sr^2) = \overline{2}. \]

The two relevant fibers are

\[ \phi^{\text{pre}}(\overline{0}) = 1N = r^2N = srN = sr^3N = \{1, r^2, sr, sr^3\} \]

and

\[ \phi^{\text{pre}}(\overline{2}) = rN = r^3N = sN = sr^2N = \{r, r^3, s, sr^2\}. \]

So we see that |D_8∕N| = 2, i.e. D_8∕N is a group of order two, namely ℤ∕2ℤ. Indeed, the image of ϕ is

\[ \{\overline{0}, \overline{2}\} \cong \mathbb{Z}/2\mathbb{Z}. \]

Question 3.5.8. Suppose G is abelian. Why does it follow that any subgroup of G is normal?

Finally here’s some food for thought: suppose one has a group presentation for a group G that uses n generators. Can you write it as a quotient of the form Fn∕N, where N is a normal subgroup of Fn?

3.6  (Digression) The first isomorphism theorem

One quick word about what other sources usually say.

Most textbooks actually define normal using the ghg^{−1} ∈ H property. Then they define G∕H for normal H in the way I did above, using the coset definition

\[ (g_1H) \cdot (g_2H) = g_1g_2H. \]

Using purely algebraic manipulations (like I did) this is well-defined, and so now you have this group G∕H or something. The underlying homomorphism isn’t mentioned at all, or is just mentioned in passing.

I think this is incredibly dumb. The normal condition looks like it gets pulled out of thin air and no one has any clue what’s going on, because no one has any clue what a normal subgroup actually should look like.

Other sources like to also write the so-called first isomorphism theorem. It goes like this.

Theorem 3.6.1 (First isomorphism theorem)
Let ϕ : G → H be a homomorphism. Then G∕ker ϕ is isomorphic to ϕ^{img}(G).

To me, this is just a clumsier way of stating the same idea.

About the only merit this claim has is that if ϕ is injective, then the image ϕ^{img}(G) is an isomorphic copy of G inside the group H. (Try to see this directly!) This is a pattern we’ll often see in other branches of mathematics: whenever we have an injective structure-preserving map, often the image of this map will be some “copy” of G. (Here “structure” refers to the group multiplication; we’ll see more examples of other “types of objects” later.)

In that sense an injective homomorphism ϕ : G ↪ H is an embedding of G into H.

3.7  A few harder problems to think about

Problem 3A (18.701 at MIT). Determine all groups G for which the map ϕ : G G defined by

\[ \phi(g) = g^2 \]

is a homomorphism.

Problem 3B. Consider the dihedral group G = D10.

(a)
Is H = ⟨r⟩ a normal subgroup of G? If so, compute G∕H up to isomorphism.
(b)
Is H = ⟨s⟩ a normal subgroup of G? If so, compute G∕H up to isomorphism.

Problem 3C. Does S4 have a normal subgroup of order 3?

Problem 3D. Let G and H be finite groups, where |G| = 1000 and |H | = 999. Show that a homomorphism G H must be trivial.

Problem 3E. Let ℂ^× denote the nonzero complex numbers under multiplication. Show that there are five homomorphisms ℤ∕5ℤ → ℂ^× but only two homomorphisms D_{10} → ℂ^×, even though ℤ∕5ℤ is a subgroup of D_{10}.

Problem 3F. Find a non-abelian group G such that every subgroup of G is normal. (These groups are called Hamiltonian.)

Problem 3G (PRIMES entrance exam, 2017). Let G be a group with presentation given by

\[ G = \left\langle a, b, c \mid ab = c^2a^4,\ bc = ca^6,\ ac = ca^8,\ c^{2018} = b^{2019} \right\rangle. \]

Determine the order of G.

Problem 3H (Homophony group). The homophony group (of English) is the group with 26 generators a, b, …, z and one relation for every pair of English words which sound the same. For example knight = night (and hence k = 1). Prove that the group is trivial.

4  Rings and ideals

4.1  Some motivational metaphors about rings vs groups

In this chapter we’ll introduce the notion of a commutative ring R. It is a larger structure than a group: it will have two operations, addition and multiplication, rather than just one. We will then immediately define a ring homomorphism R → S between pairs of rings.

This time, instead of having normal subgroups H ⊴ G, rings will instead have subsets I ⊆ R called ideals, which are not themselves rings but satisfy some niceness conditions. We will then show how to define R∕I, in analogy to G∕H as before. Finally, like with groups, we will talk a bit about how to generate ideals.

Here is a possibly helpful table of analogies to help you keep track:

                Group               Ring

Notation        G                   R
Operations      ⋆                   +, ×
Commutativity   only if abelian     for us, always
Sub-structure   subgroup            (not discussed)
Homomorphism    grp hom. G → H      ring hom. R → S
Kernel          normal subgroup     ideal
Quotient        G∕H                 R∕I

4.2  (Optional) Pedagogical notes on motivation

I wrote most of these examples with a number theoretic eye in mind; thus if you liked elementary number theory, a lot of your intuition will carry over. Basically, we’ll try to generalize properties of the ring ℤ to any abelian structure in which we can also multiply. That’s why, for example, you can talk about “irreducible polynomials in ℚ[x]” in the same way you can talk about “primes in ℤ”, or about “factoring polynomials modulo p” in the same way we can talk about “unique factorization in ℤ”. Even if you only care about ℤ (say, you’re a number theorist), this has a lot of value: I assure you that trying to solve x^n + y^n = z^n (for n > 2) requires going into a ring other than ℤ!

Thus for all the sections that follow, keep ℤ in mind as your prototype.

I mention this here because commutative algebra is also closely tied to algebraic geometry. Lots of the ideas in commutative algebra have nice “geometric” interpretations that motivate the definitions, and these connections are explored in the corresponding part later. So, I want to admit outright that this is not the only good way (perhaps not even the most natural one) of motivating what is to follow.

4.3  Definition and examples of rings

Prototypical example for this section: ℤ all the way! Also R[x] and various fields (next section).

Well, I guess I’ll define a ring.

Definition 4.3.1. A ring is a triple (R, +, ×), with the two operations usually called addition and multiplication, such that

(i)
(R,+) is an abelian group, with identity 0R, or just 0.
(ii)
× is an associative, binary operation on R with some identity, written 1R or just 1.
(iii)
Multiplication distributes over addition.

The ring R is commutative if × is commutative.

Abuse of Notation 4.3.2. As usual, we will abbreviate (R,+,×) to R.

Abuse of Notation 4.3.3. For simplicity, assume all rings are commutative for the rest of this chapter. We’ll run into some noncommutative rings eventually, but for such rings we won’t need the full theory of this chapter anyways.

These definitions are just here for completeness. The examples are much more important.

Example 4.3.4 (Typical rings)

(a)
The sets ℤ, ℚ, and ℝ are all rings with the usual addition and multiplication.
(b)
The integers modulo n are also a ring with the usual addition and multiplication. We also denote it by ℤ∕nℤ.

Here is also a trivial example.

Definition 4.3.5. The zero ring is the ring R with a single element. We denote the zero ring by 0. A ring is nontrivial if it is not the zero ring.

Exercise 4.3.6 (Comedic). Show that a ring is nontrivial if and only if 0_R ≠ 1_R.

Since I’ve defined this structure, I may as well state the obligatory facts about it.

Fact 4.3.7. For any ring R and r ∈ R, r ⋅ 0_R = 0_R. Moreover, r ⋅ (−1_R) = −r.

Here are some more examples of rings.

Example 4.3.8 (Product ring)
Given two rings R and S the product ring, denoted R × S, is defined as ordered pairs (r,s) with both operations done component-wise. For example, the Chinese remainder theorem says that

\[ \mathbb{Z}/15\mathbb{Z} \cong \mathbb{Z}/3\mathbb{Z} \times \mathbb{Z}/5\mathbb{Z} \]

with the isomorphism n mod 15 ↦ (n mod 3, n mod 5).
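As a sanity check that this map respects both ring operations at once, take 7 and 11 in ℤ∕15ℤ, which map to (1, 2) and (2, 1) respectively. Then

\[ 7 + 11 \equiv 3 \mapsto (0, 3) = (1,2) + (2,1), \qquad 7 \cdot 11 \equiv 2 \mapsto (2, 2) = (1,2) \cdot (2,1). \]

(All arithmetic is mod 15 on the left, and componentwise mod 3 and mod 5 on the right.)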

Remark 4.3.9 — Equivalently, we can define R × S as the abelian group R ⊕ S, and endow it with the multiplication where r ⋅ s = 0 for r ∈ R, s ∈ S.

Question 4.3.10. Which (r,s) is the identity element of the product ring R × S?

Example 4.3.11 (Polynomial ring)
Given any ring R, the polynomial ring R[x] is defined as the set of polynomials with coefficients in R:

\[ R[x] = \left\{ a_n x^n + a_{n-1} x^{n-1} + \cdots + a_0 \mid a_0, \dots, a_n \in R \right\}. \]

This is pronounced “R adjoin x”. Addition and multiplication are done exactly in the way you would expect.

Remark 4.3.12 (Digression on division) Happily, polynomial division also does what we expect: if p ∈ R[x] is a polynomial, and p(a) = 0, then (x − a)q(x) = p(x) for some polynomial q. Proof: do polynomial long division.

With that, note the caveat that

x2 − 1 ≡ (x− 1)(x + 1)  (mod  8)

has four roots 1, 3, 5, 7 in ℤ∕8ℤ.

The problem is that 2 ⋅ 4 = 0 even though 2 and 4 are not zero; we call 2 and 4 zero divisors for that reason. In an integral domain (a ring without zero divisors), this pathology goes away, and just about everything you know about polynomials carries over. (I’ll say this all again next section.)

Example 4.3.13 (Multi-variable polynomial ring)
We can consider polynomials in n variables with coefficients in R, denoted R[x_1, …, x_n]. (We can even adjoin infinitely many x’s if we like!)

Example 4.3.14 (Gaussian integers are a ring)
The Gaussian integers are the set of complex numbers with integer real and imaginary parts, that is

\[ \mathbb{Z}[i] = \left\{ a + bi \mid a, b \in \mathbb{Z} \right\}. \]

Abuse of Notation 4.3.15 (Liberal use of adjoinment). Careful readers will detect some abuse in notation here. ℤ[i] should officially be “integer-coefficient polynomials in a variable i”. However, it is understood from context that i^2 = −1; and a polynomial in i = √−1 “is” a Gaussian integer.

Example 4.3.16 (Cube root of 2)
As another example (using the same abuse of notation):

\[ \mathbb{Z}[\sqrt[3]{2}] = \left\{ a + b\sqrt[3]{2} + c\sqrt[3]{4} \mid a, b, c \in \mathbb{Z} \right\}. \]

4.4  Fields

Prototypical example for this section: ℚ is a field, but ℤ is not.

Although we won’t need to know what a field is until next chapter, they’re so convenient for examples I will go ahead and introduce them now.

As you might already know, if the multiplication is invertible, then we call the ring a field. To be explicit, let me write the relevant definitions.

Definition 4.4.1. A unit of a ring R is an element u R which is invertible: for some x R we have ux = 1R.

Example 4.4.2 (Examples of units)

(a)
The units of ℤ are ±1, because these are the only things which “divide 1” (which is the reason for the name “unit”).
(b)
On the other hand, in ℚ everything is a unit (except 0). For example, 3∕5 is a unit since (3∕5) ⋅ (5∕3) = 1.
(c)
The Gaussian integers ℤ[i] have four units: ±1 and ±i.

Definition 4.4.3. A nontrivial (commutative) ring is a field when all its nonzero elements are units.

Colloquially, we say that

A field is a structure where you can add, subtract, multiply, and divide.

Depending on context, they are often denoted either k, K, F.

Example 4.4.4 (First examples of fields)

(a)
ℚ, ℝ, ℂ are fields, since the notion 1∕c makes sense in them for any c ≠ 0.
(b)
If p is a prime, then ℤ∕pℤ is a field, which we will usually denote by 𝔽_p.

The trivial ring 0 is not considered a field, since we require fields to be nontrivial.

4.5  Homomorphisms

Prototypical example for this section: ℤ → ℤ∕5ℤ by modding out by 5.

This section is going to go briskly – it’s the obvious generalization of all the stuff we did with quotient groups.

First, we define a homomorphism and isomorphism.

Definition 4.5.1. Let R = (R, +_R, ×_R) and S = (S, +_S, ×_S) be rings. A ring homomorphism is a map ϕ : R → S such that

(i)
ϕ(x +_R y) = ϕ(x) +_S ϕ(y) for each x, y ∈ R.
(ii)
ϕ(x ×_R y) = ϕ(x) ×_S ϕ(y) for each x, y ∈ R.
(iii)
ϕ(1_R) = 1_S.

If ϕ is a bijection then ϕ is an isomorphism and we say that rings R and S are isomorphic.

Just what you would expect. The only surprise is that we also demand ϕ(1R) to go to 1S. This condition is not extraneous: consider the map called “multiply by zero”.

Example 4.5.2 (Examples of homomorphisms)

(a)
The identity map, as always.
(b)
The map ℤ → ℤ∕5ℤ modding out by 5.
(c)
The map ℤ[x] → ℤ by p(x) ↦ p(0), i.e. taking the constant term.
(d)
For any ring R, there is a trivial ring homomorphism R → 0.

Example 4.5.3 (Non-examples of homomorphisms)
Because we require 1_R to go to 1_S, some maps that you might have thought were homomorphisms will fail.

(a)
The map ℤ → ℤ by x ↦ 2x is not a ring homomorphism. Aside from the fact it sends 1 to 2, it also does not preserve multiplication.
(b)
If S is a nontrivial ring, the map R → S by x ↦ 0 is not a ring homomorphism, even though it preserves multiplication.
(c)
There is no ring homomorphism ℤ∕2016ℤ → ℤ at all.

In particular, whereas for groups G and H there was always a trivial group homomorphism sending everything in G to 1H, this is not the case for rings.

4.6  Ideals

Prototypical example for this section: The multiples of 5 are an ideal of ℤ.

Now, just like we were able to mod out by groups, we’d also like to define quotient rings. So once again,

Definition 4.6.1. The kernel of a ring homomorphism ϕ : R → S, denoted ker ϕ, is the set of r ∈ R such that ϕ(r) = 0.

In group theory, we were able to characterize the “normal” subgroups by a few obviously necessary conditions (namely, gHg^{−1} = H). We can do the same thing for rings, and it’s in fact easier because our operations are commutative.

First, note two obvious facts about the kernel I = ker ϕ:

(i)
It is closed under addition: if ϕ(x) = ϕ(y) = 0 then ϕ(x + y) = 0 too.
(ii)
It absorbs multiplication: if ϕ(x) = 0, then for any r ∈ R we have ϕ(rx) = ϕ(r)ϕ(x) = 0 as well.

A (nonempty) subset I ⊆ R is called an ideal if it satisfies these properties. That is,

Definition 4.6.2. A nonempty subset I ⊆ R is an ideal if it is closed under addition, and for each x ∈ I, rx ∈ I for all r ∈ R. It is proper if I ≠ R.

Note that in the second condition, r need not be in I! So this is stronger than merely saying I is closed under multiplication.

Remark 4.6.3 — If R is not commutative, we also need the condition xr ∈ I. That is, the ideal is two-sided: it absorbs multiplication from both the left and the right. But since rings in the Napkin are commutative, we needn’t worry about this distinction.

Example 4.6.4 (Prototypical example of an ideal)
Consider the set I = 5ℤ = {…, −10, −5, 0, 5, 10, …} as an ideal in ℤ. We indeed see I is the kernel of the “take mod 5” homomorphism:

ℤ ↠ ℤ ∕5ℤ.

It’s clearly closed under addition, and it absorbs multiplication by any element of ℤ: given 15 ∈ I and 999 ∈ ℤ, we get 15 ⋅ 999 ∈ I.

Exercise 4.6.5 (Mandatory: fields have two ideals). If K is a field, show that K has exactly two ideals. What are they?

Now we claim that these conditions are sufficient. More explicitly,

Theorem 4.6.6 (Ring analog of normal subgroups)
Let R be a ring and I ⊆ R. Then I is the kernel of some homomorphism if and only if it’s an ideal.

Proof. It’s quite similar to the proof for the normal subgroup thing, and you might try it yourself as an exercise.

Obviously the conditions are necessary. To see they’re sufficient, we define a ring by “cosets”

S = {r + I | r ∈ R }.

These are the equivalence classes where we say r_1 ∼ r_2 if r_1 − r_2 ∈ I (think of this as taking “mod I”). To see that these form a ring, we have to check that the addition and multiplication we put on them is well-defined. Specifically, we want to check that if r_1 ∼ s_1 and r_2 ∼ s_2, then r_1 + r_2 ∼ s_1 + s_2 and r_1r_2 ∼ s_1s_2. We actually already did the first part – just think of R and S as abelian groups, forgetting for the moment that we can multiply. The multiplication is more interesting.

Exercise 4.6.7 (Recommended). Show that if r_1 ∼ s_1 and r_2 ∼ s_2, then r_1r_2 ∼ s_1s_2. You will need to use the fact that I absorbs multiplication by any elements of R, not just those in I.

Anyways, since this addition and multiplication is well-defined, there is now a surjective homomorphism R → S with kernel exactly I. □

Definition 4.6.8. Given an ideal I, we define as above the quotient ring

R ∕I := {r + I | r ∈ R} .

It’s the ring of these equivalence classes. This ring is pronounced “R mod I”.

Example 4.6.9 (ℤ∕5ℤ)
The integers modulo 5 formed by “modding out additively by 5” are the ℤ∕5ℤ we have already met.

But here’s an important point: just as we don’t actually think of ℤ∕5ℤ as consisting of k + 5ℤ for k = 0, …, 4, we also don’t really want to think about R∕I as elements r + I. The better way to think about it is

R∕I is the result when we declare that elements of I are all zero; that is, we “mod out by elements of I”.

For example, modding out by 5ℤ means that we consider all elements in ℤ divisible by 5 to be zero. This gives you the usual modular arithmetic!

Exercise 4.6.10. Earlier, we wrote ℤ[i] for the Gaussian integers, which was a slight abuse of notation. Convince yourself that this ring could instead be written as ℤ[x]∕(x^2 + 1), if we wanted to be perfectly formal. (We will stick with ℤ[i] though — it’s more natural.)

Figure out the analogous formalization of ℤ[∛2].

4.7  Generating ideals

Prototypical example for this section: In ℤ, the ideals are all of the form (n).

Let’s give you some practice with ideals.

An important piece of intuition is that once an ideal contains a unit, it contains 1, and thus must contain the entire ring. That’s why the notion of “proper ideal” is useful language. To expand on that:

Proposition 4.7.1 (Proper ideal no units)
Let R be a ring and I ⊆ R an ideal. Then I is proper (i.e. I ≠ R) if and only if it contains no units of R.

Proof. Suppose I contains a unit u, i.e. an element u with an inverse u^{−1}. Then it contains u ⋅ u^{−1} = 1, and thus I = R. Conversely, if I contains no units, it is obviously proper. □

As a consequence, if K is a field, then its only ideals are (0) and K (this was ?? ). So for our practice purposes, we’ll be working with rings that aren’t fields.

First practice: ℤ.

Exercise 4.7.2. Show that the only ideals of ℤ are precisely those sets of the form nℤ, where n is a nonnegative integer.

Thus, while ideals of fields are not terribly interesting, ideals of ℤ look eerily like elements of ℤ. Let’s make this more precise.

Definition 4.7.3. Let R be a ring. The ideal generated by a set of elements x_1, …, x_n ∈ R is denoted by I = (x_1, x_2, …, x_n) and given by

\[ I = \left\{ r_1x_1 + \cdots + r_nx_n \mid r_i \in R \right\}. \]

One can think of this as “the smallest ideal containing all the xi”.

The analogy of putting the {xi} in a sealed box and shaking vigorously kind of works here too.

Remark 4.7.4 (Linear algebra digression) If you know linear algebra, you can summarize this as: an ideal is an R-module. The ideal (x1,,xn) is the submodule spanned by x1,,xn.

In particular, if I = (x) then I consists of exactly the “multiples of x”, i.e. elements of the form rx for r ∈ R.

Remark 4.7.5 — We can also apply this definition to infinite generating sets, as long as only finitely many of the ri are not zero (since infinite sums don’t make sense in general).

Example 4.7.6 (Examples of generated ideals)

(a)
As (n) = nℤ for all n ∈ ℤ, every ideal in ℤ is of the form (n).
(b)
In ℤ[i], we have (5) = {5a + 5bi ∣ a, b ∈ ℤ}.
(c)
In ℤ[x], the ideal (x) consists of polynomials with zero constant term.
(d)
In ℤ[x, y], the ideal (x, y) again consists of polynomials with zero constant term.
(e)
In ℤ[x], the ideal (x, 5) consists of polynomials whose constant term is divisible by 5.

Question 4.7.7. Please check that the set I = {r1x1 + ⋅⋅⋅+ rnxn | ri ∈ R } is indeed always an ideal (closed under addition, and absorbs multiplication).

Now suppose I = (x_1, …, x_n). What does R∕I look like? According to what I said at the end of the last section, it’s what happens when we “mod out” by each of the elements x_i. For example…

Example 4.7.8 (Modding out by generated ideals)

(a)
Let R = ℤ and I = (5). Then R∕I is literally ℤ∕5ℤ, or the “integers modulo 5”: it is the result of declaring 5 = 0.
(b)
Let R = ℤ[x] and I = (x). Then R∕I means we send x to zero; hence R∕I ≅ ℤ, as given any polynomial p(x) ∈ R, we simply get its constant term.
(c)
Let R = ℤ[x] again and now let I = (x − 3). Then R∕I should be thought of as the quotient when x − 3 ≡ 0, that is, x ≡ 3. So given a polynomial p(x), its image after we mod out should be thought of as p(3). Again R∕I ≅ ℤ, but in a different way.
(d)
Finally, let I = (x − 3, 5). Then R∕I not only sends x to three, but also 5 to zero. So given p ∈ R, we get p(3) (mod 5). Then R∕I ≅ ℤ∕5ℤ.

Remark 4.7.9 (Mod notation) By the way, given an ideal I of a ring R, it’s totally legit to write

x ≡ y   (mod  I)

to mean that x − y ∈ I. Everything you learned about modular arithmetic carries over.

4.8  Principal ideal domains

Prototypical example for this section: ℤ is a PID, ℤ[x] is not. ℚ[x] is a PID, ℚ[x, y] is not.

What happens if we put multiple generators in an ideal, like (10,15) ? Well, we have by definition that (10,15) is given as a set by

\[ (10, 15) := \left\{ 10x + 15y \mid x, y \in \mathbb{Z} \right\}. \]

If you’re good at number theory you’ll instantly recognize this as 5ℤ = (5). Surprise! In ℤ, the ideal (a, b) is exactly (gcd(a, b)). And that’s exactly the reason you often see the GCD of two numbers denoted (a, b).
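To spell out why (10, 15) = (5): on one hand 5 = 15 ⋅ 1 + 10 ⋅ (−1) ∈ (10, 15), so every multiple of 5 lies in (10, 15). On the other hand, every element 10x + 15y is visibly a multiple of 5. Hence

\[ (10, 15) = \{10x + 15y \mid x, y \in \mathbb{Z}\} = 5\mathbb{Z} = (5). \]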

We call such an ideal (one generated by a single element) a principal ideal. So, in , every ideal is principal. But the same is not true in more general rings.

Example 4.8.1 (A non-principal ideal)
In ℤ[x], I = (x, 2015) is not a principal ideal.

For if I = (f) for some polynomial f ∈ I, then f divides x and 2015. This can only occur if f = ±1, but then I contains ±1, which it does not.

A ring with the property that all its ideals are principal is called a principal ideal ring. We like this property because they effectively let us take the “greatest common factor” in a similar way as the GCD in .

In practice, we actually usually care about so-called principal ideal domains (PID’s). But we haven’t defined what a domain is yet. Nonetheless, all the examples below are actually PID’s, so we will go ahead and use this word for now, and tell you what the additional condition is in the next chapter.

Example 4.8.2 (Examples of PID’s)
To reiterate, for now you should just verify that these are principal ideal rings, even though we are using the word PID.

(a)
As we saw, ℤ is a PID.
(b)
As we also saw, ℤ[x] is not a PID, since I = (x, 2015) for example is not principal.
(c)
It turns out that for a field k the ring k[x] is always a PID. For example, ℚ[x], ℝ[x], ℂ[x] are PID’s.

If you want to try and prove this, first prove an analog of Bezout’s lemma, which implies the result.

(d)
ℚ[x, y] is not a PID, because (x, y) is not principal.

4.9  Noetherian rings

Prototypical example for this section: ℤ[x_1, x_2, …] is not Noetherian, but most reasonable rings are. In particular polynomial rings are. (Equivalently, only weirdos care about non-Noetherian rings.)

If it’s too much to ask that an ideal is generated by one element, perhaps we can at least ask that our ideals are generated by finitely many elements. Unfortunately, in certain weird rings this is also not the case.

Example 4.9.1 (Non-Noetherian ring)
Consider the ring R = ℂ[x1,x2,x3,…] which has infinitely many free variables. Then the ideal I = (x1,x2,…) ⊆ R cannot be written with a finite generating set.

Nonetheless, most “sane” rings we work in do have the property that their ideals are finitely generated. We now name such rings and give two equivalent definitions:

Proposition 4.9.2 (The equivalent definitions of a Noetherian ring)
For a ring R, the following are equivalent:

(a)
Every ideal I of R is finitely generated (i.e. can be written with a finite generating set).
(b)
There does not exist an infinite ascending chain of ideals
I1 ⊊ I2 ⊊ I3 ⊊ ....

The absence of such chains is often called the ascending chain condition.

Such rings are called Noetherian.

Example 4.9.3 (Non-Noetherian ring breaks ACC)
In the ring R = ℂ[x1,x2,x3,…] we have an infinite ascending chain

(x1) ⊊ (x1,x2) ⊊ (x1,x2,x3) ⊊ ....

From the example, you can kind of see why the proposition is true: from an infinitely generated ideal you can extract an ascending chain by throwing elements in one at a time. I’ll leave the proof to you if you want to do it.3

Question 4.9.4. Why are fields Noetherian? Why are PID’s (such as ℤ) Noetherian?

This leaves the question: is our prototypical non-example of a PID, ℤ[x], a Noetherian ring? The answer is a glorious yes, according to the celebrated Hilbert basis theorem.

Theorem 4.9.5 (Hilbert basis theorem)
Given a Noetherian ring R, the ring R[x] is also Noetherian. Thus by induction, R[x1,x2,…,xn] is Noetherian for any integer n.

The proof of this theorem is really olympiad flavored, so I couldn’t possibly spoil it – I’ve left it as a problem at the end of this chapter.

Noetherian rings really shine in algebraic geometry, and it’s a bit hard for me to motivate them right now, other than to say “most rings you’ll encounter are Noetherian”. Please bear with me!

4.10  A few harder problems to think about

Problem 4A. The ring R = ℝ[x]∕(x² + 1) is one that you’ve seen before. What is its name?

Problem 4B. Show that ℝ[x]∕(x² − x) ≅ ℝ × ℝ.

Problem 4C. In the ring ℤ, let I = (2016) and J = (30). Show that I ∩ J is an ideal of ℤ and compute its elements.

Problem 4D. Let R be a ring and I an ideal. Find an inclusion-preserving bijection between the ideals of R∕I and the ideals of R which contain I.

Problem 4E. Let R be a ring.

(a)
Prove that there is exactly one ring homomorphism ℤ → R.
(b)
Prove that the number of ring homomorphisms ℤ[x] → R is equal to the number of elements of R.

Problem 4F. Prove the Hilbert basis theorem, ?? .

Problem 4G (USA Team Selection Test 2016). Let 𝔽p denote the integers modulo a fixed prime number p. Define Ψ : 𝔽p[x] → 𝔽p[x] by

Ψ( ∑_{i=0}^{n} a_i x^i ) = ∑_{i=0}^{n} a_i x^{p^i}.

Let S denote the image of Ψ.

(a)
Show that S is a ring with addition given by polynomial addition, and multiplication given by function composition.
(b)
Prove that Ψ: 𝔽p[x] S is then a ring isomorphism.

Problem 4H. Let A ⊆ B ⊆ C be rings. Suppose C is a finitely generated A-module. Does it follow that B is a finitely generated A-module?

5  Flavors of rings

We continue our exploration of rings by considering some nice-ness properties that rings or ideals can satisfy, which will be valuable later on. As before, number theory is interlaced as motivation. I guess I can tell you at the outset what the completed table is going to look like, so you know what to expect.

Ring noun       | Ideal adjective    | Relation
PID             | principal          | R is a PID ⟺ R is an integral domain and every I is principal
Noetherian ring | finitely generated | R is Noetherian ⟺ every I is fin. gen.
field           | maximal            | R∕I is a field ⟺ I is maximal
integral domain | prime              | R∕I is an integral domain ⟺ I is prime

5.1  Fields

Prototypical example for this section: ℚ is a field, but ℤ is not.

We already saw this definition last chapter: a field K is a nontrivial ring for which every nonzero element is a unit.

In particular, there are only two ideals in a field: the ideal (0), which is maximal, and the entire field K. (Indeed, any ideal containing a nonzero element x absorbs x⁻¹ ⋅ x = 1, and hence is all of K.)

5.2  Integral domains

Prototypical example for this section: is an integral domain.

In practice, we are often not so lucky that we have a full-fledged field. Now it would be nice if we could still conclude the zero product property: if ab = 0 then either a = 0 or b = 0. If our ring is a field, this is true: if b ≠ 0, then we can multiply by b⁻¹ to get a = 0. But many other rings we consider, like ℤ and ℤ[x], also have this property, despite not having division.

Not all rings though: in ℤ∕15ℤ,

3 ⋅ 5 ≡ 0 (mod 15).

If a, b ≠ 0 but ab = 0, then we say a and b are zero divisors of the ring R. So we give a name to rings without zero divisors.

Definition 5.2.1. A nontrivial ring with no zero divisors is called an integral domain.1

Question 5.2.2. Show that a field is an integral domain.

Exercise 5.2.3 (Cancellation in integral domains). Suppose ac = bc in an integral domain, and c ≠ 0. Show that a = b. (There is no c⁻¹ to multiply by, so you have to use the definition.)

Example 5.2.4 (Examples of integral domains)
Every field is an integral domain, so all the previous examples apply. In addition:

(a)
ℤ is an integral domain, but it is not a field.
(b)
ℤ[x] is not a field, since there is no polynomial P(x) with xP(x) = 1. However, ℤ[x] is an integral domain, because if P(x)Q(x) = 0 then one of P or Q is zero.
(c)
ℝ[x] is also an example of an integral domain. In fact, R[x] is an integral domain for any integral domain R (why?).
(d)
ℤ∕nℤ is a field (hence an integral domain) exactly when n is prime. When n is not prime, it is a ring but not an integral domain.

The trivial ring 0 is not considered an integral domain.

At this point, we go ahead and say:

Definition 5.2.5. An integral domain where all ideals are principal is called a principal ideal domain (PID).

The ring ℤ∕6ℤ is an example of a ring which is a principal ideal ring, but not an integral domain. As we alluded to earlier, we will never really use “principal ideal ring” in any real way: we typically will want to strengthen it to PID.

5.3  Prime ideals

Prototypical example for this section: (5) is a prime ideal of ℤ.

We know that every integer can be factored (up to sign) as a unique product of primes; for example 15 = 3 ⋅ 5 and 10 = 2 ⋅ 5. You might remember the proof involves the so-called Bézout’s lemma, which essentially says that (a,b) = (gcd(a,b)); in other words we’ve carefully used the fact that ℤ is a PID.

It turns out that for general rings, the situation is not as nice as factoring elements because most rings are not PID’s. The classic example of something going wrong is

6 = 2 ⋅ 3 = (1 − √−5)(1 + √−5)

in ℤ[√−5]. Nonetheless, we can sidestep the issue and talk about factoring ideals: somehow the example 10 = 2 ⋅ 5 should be (10) = (2) ⋅ (5), which says “every multiple of 10 is the product of a multiple of 2 and a multiple of 5”. I’d have to tell you then how to multiply two ideals, which I do in the chapter on unique factorization.

Let’s at least figure out what primes are. In ℤ, we have that p ≠ 1 is prime if whenever p ∣ xy, either p ∣ x or p ∣ y. We port over this definition to our world of ideals.

Definition 5.3.1. A proper ideal I ⊊ R is a prime ideal if whenever xy ∈ I, either x ∈ I or y ∈ I.

The condition that I is proper is analogous to the fact that we don’t consider 1 to be a prime number.

Example 5.3.2 (Examples and non-examples of prime ideals)

(a)
The ideal (7) of ℤ is prime.
(b)
The ideal (8) of ℤ is not prime, since 2 ⋅ 4 = 8 yet neither 2 nor 4 is in (8).
(c)
The ideal (x) of ℤ[x] is prime.
(d)
The ideal (x²) of ℤ[x] is not prime, since x ⋅ x = x² yet x ∉ (x²).
(e)
The ideal (3,x) of ℤ[x] is prime. This is actually easiest to see using ??  below.
(f)
The ideal (5) = 5ℤ + 5iℤ of ℤ[i] is not prime, since the elements 3 + i and 3 − i have product 10 ∈ (5), yet neither is itself in (5).

Remark 5.3.3 — Ideals have the nice property that they get rid of “sign issues”. For example, in ℤ, do we consider −3 to be a prime? When phrased with ideals, this annoyance goes away: (−3) = (3). More generally, for a ring R, talking about ideals lets us ignore multiplication by a unit. (Note that −1 is a unit in ℤ.)

Exercise 5.3.4. What do you call a ring R for which the zero ideal (0) is prime?

We also have:

Theorem 5.3.5 (Prime ideal quotient is integral domain)
An ideal I is prime if and only if R∕I is an integral domain.

Exercise 5.3.6 (Mandatory). Convince yourself the theorem is true; it is just definition chasing. (A possible start is to consider R = ℤ and I = (15).)

I now must regrettably inform you that unique factorization is still not true even with the notion of a “prime” ideal (though again I haven’t told you how to multiply two ideals yet). But it will become true with some additional assumptions that will arise in algebraic number theory (relevant buzzword: Dedekind domain).

5.4  Maximal ideals

Prototypical example for this section: The ideal (x,5) is maximal in ℤ[x], by quotienting.

Here’s another flavor of an ideal.

Definition 5.4.1. A proper ideal I of a ring R is maximal if it is not contained in any other proper ideal.

Example 5.4.2 (Examples of maximal ideals)

(a)
The ideal I = (7) of ℤ is maximal, because if an ideal J contains 7 and an element n not in I, it must contain gcd(7,n) = 1, and hence J = ℤ.
(b)
The ideal (x) is not maximal in ℤ[x], because it’s contained in (x,5) (among others).
(c)
On the other hand, (x,5) is indeed maximal in ℤ[x]. This is actually easiest to verify using ??  below.
(d)
Also, (x) is maximal in ℝ[x], again appealing to ??  below.

Exercise 5.4.3. What do you call a ring R for which the zero ideal (0) is maximal?

There’s an analogous theorem to the one for prime ideals.

Theorem 5.4.4 (I maximal ⟺ R∕I field)
An ideal I is maximal if and only if R∕I is a field.

Proof. A ring is a field if and only if (0) is the only maximal ideal. So this follows by ?? . □

Corollary 5.4.5 (Maximal ideals are prime)
If I is a maximal ideal of a ring R, then I is prime.

Proof. If I is maximal, then R∕I is a field, hence an integral domain, so I is prime. □

In practice, because modding out by generated ideals is pretty convenient, this is a very efficient way to check whether an ideal is maximal.

Example 5.4.6 (Modding out in ℤ[x])

(a)
This instantly implies that (x,5) is a maximal ideal in ℤ[x], because if we mod out by x and 5 in ℤ[x], we just get 𝔽5, which is a field.
(b)
On the other hand, modding out by just x gives ℤ, which is an integral domain but not a field; that’s why (x) is prime but not maximal.

As we saw, any maximal ideal is prime. But now note that ℤ has the special property that all of its nonzero prime ideals are also maximal. It’s with this condition and a few other minor conditions that you get a so-called Dedekind domain where prime factorization of ideals does work. More on that later.

5.5  Field of fractions

Prototypical example for this section: Frac(ℤ) = ℚ.

As long as we are here, we take the time to introduce a useful construction that turns any integral domain into a field.

Definition 5.5.1. Given an integral domain R, we define its field of fractions or fraction field Frac(R) as follows: it consists of elements a∕b, where a, b ∈ R and b ≠ 0. We set a∕b = c∕d if and only if bc = ad. Addition and multiplication are defined by

a∕b + c∕d = (ad + bc)∕(bd)
(a∕b) ⋅ (c∕d) = (ac)∕(bd).

In fact everything you know about ℚ basically carries over by analogy. You can prove if you want that this is indeed a field (and that the operations above are well-defined), but considering how comfortable we are that ℚ is well-defined, I wouldn’t worry about it…
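
If you do want a taste of the check, here is the computation showing addition is well-defined (one of several verifications): suppose a∕b = a′∕b′, i.e. a′b = ab′. Then

(ad + bc)(b′d) = (ab′)d² + bb′cd = (a′b)d² + bb′cd = (a′d + b′c)(bd),

which says exactly that (ad + bc)∕(bd) = (a′d + b′c)∕(b′d).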

Definition 5.5.2. Let k be a field. We define k(x) = Frac(k[x]) (read “k of x”), and call it the field of rational functions.

Example 5.5.3 (Examples of fraction fields)

(a)
By definition, Frac(ℤ) = ℚ.
(b)
The field ℝ(x) consists of rational functions in x:

ℝ(x) = { f(x)∕g(x) | f, g ∈ ℝ[x], g ≠ 0 }.

For example, x²∕(2 − x³) might be a typical element.

Example 5.5.4 (Gaussian rationals)
Just like we defined ℤ[i] by abusing notation, we can also write ℚ(i) = Frac(ℤ[i]). Officially, it should consist of

ℚ(i) = { f(i)∕g(i) | g(i) ≠ 0 }

for polynomials f and g with rational coefficients. But since i² = −1 this just leads to

ℚ(i) = { (a + bi)∕(c + di) | a, b, c, d ∈ ℚ, (c,d) ≠ (0,0) }.

And since 1∕(c + di) = (c − di)∕(c² + d²), we end up with

ℚ(i) = {a + bi | a,b ∈ ℚ }.
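
As a quick numerical sanity check (numbers picked at random):

(1 + 2i)∕(3 + 4i) = (1 + 2i)(3 − 4i)∕(3² + 4²) = (11 + 2i)∕25 = 11∕25 + (2∕25) i,

which indeed has the promised form a + bi with a, b ∈ ℚ.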

5.6  Unique factorization domains (UFD’s)

Prototypical example for this section: ℤ, and polynomial rings in general.

Here is one stray definition that will be important for those with a number-theoretic inclination. Over the positive integers, we have a fundamental theorem of arithmetic, stating that every integer is uniquely the product of prime numbers.

We can even make an analogous statement in ℤ or ℤ[i], if we allow representations like 6 = (−2)(−3) and so on. The trick is that we only consider everything up to units; so 6 = (−2)(−3) = 2 ⋅ 3 are considered the same.

The general definition goes as follows.

Definition 5.6.1. A nonzero non-unit of an integral domain R is irreducible if it cannot be written as the product of two non-units.

An integral domain R is a unique factorization domain if every nonzero non-unit of R can be written as the product of irreducible elements, which is unique up to multiplication by units and reordering of the factors.

Question 5.6.2. Verify that ℤ is a UFD.

Example 5.6.3 (Examples of UFD’s)

(a)
Fields are a “degenerate” example of UFD’s: every nonzero element is a unit, so there is nothing to check.
(b)
ℤ is a UFD. The irreducible elements are p and −p for primes p, for example 5 or −17.
(c)
ℚ[x] is a UFD: polynomials with rational coefficients can be uniquely factored, up to scaling by constants (as the units of ℚ[x] are just the nonzero rational numbers).
(d)
ℤ[x] is a UFD.
(e)
The Gaussian integers ℤ[i] turn out to be a UFD too (and this will be proved in the chapters on algebraic number theory).
(f)
ℤ[√−5] is the classic non-example of a UFD: one may write

6 = 2 ⋅ 3 = (1 − √−5)(1 + √−5),

but each of 2, 3, 1 ± √−5 is irreducible. (It turns out the right way to fix this is by considering prime ideals instead, and this is one big motivation for ?? .)

(g)
Theorem we won’t prove: every PID is a UFD.
(h)
Theorem we won’t prove: if R is a UFD, so is R[x] (and hence by induction so is R[x,y], R[x,y,z], …).

5.7  A few harder problems to think about

Not olympiad problems, but again the spirit is very close to what you might see in an olympiad.

Problem 5A. Consider the ring

ℚ[√2] = { a + b√2 | a, b ∈ ℚ }.

Is it a field?

Problem 5B (Homomorphisms from fields are injective). Let K be a field and R a ring. Prove that any homomorphism ψ : K → R is injective.2

Problem 5C (Pre-image of prime ideals). Suppose ϕ : R → S is a ring homomorphism, and I ⊆ S is a prime ideal. Prove that ϕpre(I) is prime as well.

Problem 5D. Let R be an integral domain with finitely many elements. Prove that R is a field.

Problem 5E (Krull’s theorem). Let R be a ring and J a proper ideal.

(a)
Prove that if R is Noetherian, then J is contained in a maximal ideal I.
(b)
Use Zorn’s lemma (?? ) to prove the result even if R isn’t Noetherian.

Problem 5F (Spec k[x]). Describe the prime ideals of ℂ[x] and ℝ[x].

Problem 5G. Prove that any nonzero prime ideal of ℤ[√2] is also a maximal ideal.

Part III
Basic Topology

6  Properties of metric spaces

At the end of the last chapter on metric spaces, we introduced two adjectives “open” and “closed”. These are important because they’ll grow up to be the definition for a general topological space, once we graduate from metric spaces.

To move forward, we provide a couple of niceness adjectives that apply to entire metric spaces, rather than just to a set relative to a parent space. They are “(totally) bounded” and “complete”. These adjectives are specific to metric spaces, but will grow up to become the notion of compactness, which is, in the words of [?], “the single most important concept in real analysis”. At the end of the chapter, we will know enough to realize that something is amiss with our definition of homeomorphism, and this will serve as the starting point for the next chapter, when we define fully general topological spaces.

6.1  Boundedness

Prototypical example for this section: [0,1] is bounded but ℝ is not.

Here is one notion of how to prevent a metric space from being a bit too large.

Definition 6.1.1. A metric space M is bounded if there is a constant D such that d(p,q) ≤ D for all p, q ∈ M.

You can change the order of the quantifiers:

Proposition 6.1.2 (Boundedness with radii instead of diameters)
A metric space M is bounded if and only if for every point p ∈ M, there is a radius R (possibly depending on p) such that d(p,q) ≤ R for all q ∈ M.

Exercise 6.1.3. Use the triangle inequality to show these are equivalent. (The names “radius” and “diameter” are a big hint!)

Example 6.1.4 (Examples of bounded spaces)

(a)
Finite intervals like [0,1] and (a,b) are bounded.
(b)
The unit square [0,1]² is bounded.
(c)
ℝⁿ is not bounded for any n ≥ 1.
(d)
A discrete space on an infinite set is bounded.
(e)
ℤ is not bounded, despite being homeomorphic to the discrete space!

The fact that a discrete space on an infinite set is “bounded” might be upsetting to you, so here is a somewhat stronger condition you can use:

Definition 6.1.5. A metric space is totally bounded if for any 𝜀 > 0, we can cover M with finitely many 𝜀-neighborhoods.

For example, if 𝜀 = 1∕2, you can cover [0,1]² by finitely many 𝜀-neighborhoods.
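
Concretely, one choice among many: the four 𝜀-neighborhoods of radius 1∕2 centered at (1∕4, 1∕4), (1∕4, 3∕4), (3∕4, 1∕4), (3∕4, 3∕4) already cover [0,1]², since every point of the square is within distance √2∕4 < 1∕2 of one of these four centers.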

Exercise 6.1.6. Show that “totally bounded” implies “bounded”.

Example 6.1.7 (Examples of totally bounded spaces)

(a)
A subset of ℝⁿ is bounded if and only if it is totally bounded.

This is for Euclidean geometry reasons: for example in ℝ², if I can cover a set by a single disk of radius 2, then I can certainly cover it by finitely many disks of radius 1∕2. (We won’t prove this rigorously.)

(b)
So for example [0,1] or [0,2] × [0,3] is totally bounded.
(c)
In contrast, a discrete space on an infinite set is not totally bounded.

6.2  Completeness

Prototypical example for this section: ℝ is complete, but ℚ and (0,1) are not.

So far we can only talk about sequences converging if they have a limit. But consider the sequence

x1 = 1, x2 = 1.4, x3 = 1.41, x4 = 1.414, ....

It converges to √2 in ℝ, of course. But it fails to converge in ℚ; there is no rational number this sequence converges to. And so somehow, if we didn’t know about the existence of ℝ, we would have no idea that the sequence (xn) is “approaching” something.

That seems to be a shame. Let’s set up a new definition to describe these sequences whose terms get close to each other, even if they don’t approach any particular point in the space. Thus, we only want to mention the given points in the definition.

Definition 6.2.1. Let x1, x2, … be a sequence which lives in a metric space M = (M, dM). We say the sequence is Cauchy if for any 𝜀 > 0, we have

dM(xm, xn) < 𝜀

for all sufficiently large m and n.
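
For instance, here is a quick check that xn = 1∕n is a Cauchy sequence (in ℝ, say): whenever m, n ≥ N we have |1∕m − 1∕n| ≤ 1∕N, so given 𝜀 > 0 it is enough to look past any index N > 1∕𝜀.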

Question 6.2.2. Show that a sequence which converges is automatically Cauchy. (Draw a picture.)

Now we can define:

Definition 6.2.3. A metric space M is complete if every Cauchy sequence converges.

Example 6.2.4 (Examples of complete spaces)

(a)
ℝ is complete. (Depending on your definition of ℝ, this either follows by definition, or requires some work. We won’t go through this here.)
(b)
The discrete space is complete, as the only Cauchy sequences are eventually constant.
(c)
The closed interval [0,1] is complete.
(d)
ℝⁿ is complete as well. (You’re welcome to prove this by induction on n.)

Example 6.2.5 (Non-examples of complete spaces)

(a)
The rationals ℚ are not complete.
(b)
The open interval (0,1) is not complete, as the sequence 0.9, 0.99, 0.999, 0.9999, …is Cauchy but does not converge.

So, metric spaces need not be complete, like ℚ. But we certainly would like them to be complete, and in light of the following theorem this is not unreasonable.

Theorem 6.2.6 (Completion)
Every metric space can be “completed”, i.e. made into a complete space by adding in some points.

We won’t need this construction at all, so it’s left as ?? .

Example 6.2.7 (ℚ completes to ℝ)
The completion of ℚ is ℝ.

(In fact, by using a modified definition of completion not depending on the real numbers, other authors often use this as the definition of ℝ.)

6.3  Let the buyer beware

There is something suspicious about both these notions: neither are preserved under homeomorphism!

Example 6.3.1 (Something fishy is going on here)
Let M = (0,1) and N = ℝ. As we saw much earlier, M and N are homeomorphic. However:

(a)
M is bounded (in fact totally bounded), while N is not bounded.
(b)
N is complete, while M is not complete.

This is the first hint of something going awry with the metric. As we progress further into our study of topology, we will see that in fact open and closed sets (which we motivated by using the metric) are the notion that will really shine later on. I insist on introducing the metric first so that the standard pictures of open and closed sets make sense, but eventually it becomes time to remove the training wheels.

6.4  Subspaces, and (inb4) a confusing linguistic point

Prototypical example for this section: A circle is obtained as a subspace of ℝ².

As we’ve already been doing implicitly in examples, we’ll now say:

Definition 6.4.1. Every subset S ⊆ M is a metric space in its own right, by re-using the distance function on M. We say that S is a subspace of M.

For example, we saw that the circle S¹ is just a subspace of ℝ².

It thus becomes important to distinguish between

(i)
“absolute” adjectives like “complete” or “bounded”, which can be applied to both spaces, and hence even to subsets of spaces (by taking a subspace), and
(ii)
“relative” adjectives like “open (in M)” and “closed (in M)”, which make sense only relative to a space, even though people are often sloppy and omit them.

So “[0,1] is complete” makes sense, as does “[0,1] is a complete subset of ℝ”, which we take to mean “[0,1] is complete as a subspace of ℝ”. This is since “complete” is an absolute adjective.

But relative adjectives require a little more care: for example, (0,1) is open in ℝ, yet viewed as a subset of ℝ² (sitting inside the x-axis) it is not open.

To make sure you understand the above, here are two exercises to help you practice relative adjectives.

Exercise 6.4.2. Let M be a complete metric space and let S M. Prove that S is complete if and only if it is closed in M. In particular, [0,1] is complete.

Exercise 6.4.3. Let M = [0,1] ∪ (2,3). Show that [0,1] and (2,3) are both open and closed in M.

This illustrates a third point: a nontrivial set can be both open and closed.1 As we’ll see in ?? , this implies the space is disconnected; i.e. the only examples look quite like the one we’ve given above.

6.5  A few harder problems to think about

Problem 6A (Banach fixed point theorem). Let M = (M,d) be a complete metric space. Suppose T : M → M is a continuous map such that for any p ≠ q ∈ M,

d(T(p), T(q)) < 0.999 ⋅ d(p,q).

(We call T a contraction.) Show that T has a unique fixed point.

Problem 6B (Henning Makholm, on math.SE). We let M and N denote the metric spaces obtained by equipping ℝ with the following two metrics:

dM(x,y) = min{ 1, |x − y| }
dN(x,y) = |e^x − e^y|.

(a)
Fill in the following 2 × 3 table with “yes” or “no” for each cell.

  | Complete? | Bounded? | Totally bounded?
M |           |          |
N |           |          |
(b)
Are M and N homeomorphic?

Problem 6C (Completion of a metric space). Let M be a metric space. Construct a complete metric space M̄ such that M is a subspace of M̄, and every open set of M̄ contains a point of M (meaning M is dense in M̄).

Problem 6D. Show that a metric space is totally bounded if and only if any sequence has a Cauchy subsequence.

Problem 6E. Prove that ℚ is not homeomorphic to any complete metric space.

7  Topological spaces

In ??  we introduced the notion of a space by describing metrics on it. This gives you a lot of examples, and nice intuition, and tells you how you should draw pictures of open and closed sets.

However, moving forward, it will be useful to begin thinking about topological spaces in terms of just their open sets. (One motivation is that our fishy ??  shows that in some ways the notion of homeomorphism really wants to be phrased in terms of open sets, not in terms of the metric.) As we are going to see, the open sets manage to actually retain nearly all the information we need, but are simpler.1 This will be done in just a few sections, and after that we will start describing more adjectives that we can apply to topological (and hence metric) spaces.

The most important topological notion is missing from this chapter: that of a compact space. It is so important that I have dedicated a separate chapter just for it.

Quick note for those who care: the adjectives “Hausdorff”, “connected”, and later “compact” are all absolute adjectives.

7.1  Forgetting the metric

Recall ?? :

A function f : M N of metric spaces is continuous if and only if the pre-image of every open set in N is open in M.

Despite us having defined this in the context of metric spaces, this nicely doesn’t refer to the metric at all, only the open sets. As alluded to at the start of this chapter, this is a great motivation for how we can forget about the fact that we had a metric to begin with, and rather start with the open sets instead.

Definition 7.1.1. A topological space is a pair (X,𝒯 ), where X is a set of points, and 𝒯 is the topology, which consists of several subsets of X, called the open sets of X. The topology must obey the following axioms:

(i)
∅ and X are both open sets.
(ii)
Arbitrary (possibly infinite) unions of open sets are also open.
(iii)
The intersection of finitely many open sets is open.

So this time, the open sets are given. Rather than defining a metric and getting open sets from the metric, we instead start from just the open sets.

Abuse of Notation 7.1.2. We abbreviate (X,𝒯 ) by just X, leaving the topology 𝒯 implicit. (Do you see a pattern here?)

Example 7.1.3 (Examples of topologies)

(a)
Given a metric space M, we can let 𝒯 be the open sets in the metric sense. The point is that the axioms are satisfied.
(b)
In particular, a discrete space is a topological space in which every set is open. (Why?)
(c)
Given X, we can let 𝒯 = {∅,X }, the opposite extreme of the discrete space.

Now we can port over our metric definitions.

Definition 7.1.4. An open neighborhood2 of a point x ∈ X is an open set U which contains x (see figure).

Abuse of Notation 7.1.5. Just to be perfectly clear: by an “open neighborhood” I mean any open set containing x. But by an “r-neighborhood” I always mean the points with distance less than r from x, and so I can only use this term if my space is a metric space.

7.2  Re-definitions

Now that we’ve defined a topological space, for nearly all of our metric notions we can write down as the definition the one that required only open sets (which will of course agree with our old definitions when we have a metric space).

7.2.i  Continuity

Here was our motivating example, continuity:

Definition 7.2.1. We say a function f : X → Y of topological spaces is continuous at a point p ∈ X if the pre-image of any open neighborhood of f(p) is an open neighborhood of p. The function is continuous if it is continuous at every point.

Thus the notion of homeomorphism carries over: a bijection which is continuous in both directions.

Definition 7.2.2. A homeomorphism of topological spaces (X, τX) and (Y, τY) is a bijection f : X → Y which induces a bijection from τX to τY: i.e. the bijection preserves open sets.

Question 7.2.3. Show that this is equivalent to f and its inverse both being continuous.

Therefore, any property defined only in terms of open sets is preserved by homeomorphism. Such a property is called a topological property. The later adjectives we define (“connected”, “Hausdorff”, “compact”) will all be defined only in terms of the open sets, so they will all be topological properties.

7.2.ii  Closed sets

We saw last time there were two equivalent definitions for closed sets, but one of them relies only on open sets, and we use it:

Definition 7.2.4. In a general topological space X, we say that S ⊆ X is closed in X if the complement X ∖ S is open in X.

If S ⊆ X is any set, the closure of S, denoted S̄, is defined as the smallest closed set containing S.
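
For example, back in ℝ: the closure of the open interval (0,1) is [0,1], while the closure of ℚ is all of ℝ, since every real number has rational numbers arbitrarily close to it.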

Thus for general topological spaces, open and closed sets carry the same information, and it is entirely a matter of taste whether we define everything in terms of open sets or closed sets. In particular, you can translate axioms and properties of open sets to closed ones:

Question 7.2.5. Show that the (possibly infinite) intersection of closed sets is closed while the union of finitely many closed sets is closed. (Look at complements.)

Exercise 7.2.6. Show that a function is continuous if and only if the pre-image of every closed set is closed.

Mathematicians seem to have agreed that they like open sets better.

7.2.iii  Properties that don’t carry over

Not everything works:

Remark 7.2.7 (Complete and (totally) bounded are metric properties) The two metric properties we have seen, “complete” and “(totally) bounded”, are not topological properties. They rely on a metric, so as written we cannot apply them to topological spaces. One might hope that maybe, there is some alternate definition (like we saw for “continuous function”) that is just open-set based. But ??  showing (0,1) ≅ ℝ tells us that it is hopeless.

Remark 7.2.8 (Sequences don’t work well) You could also try to port over the notion of sequences and convergent sequences. However, this turns out to break a lot of desirable properties. Therefore I won’t bother to do so, and thus if we are discussing sequences you should assume that we are working with a metric space.

7.3  Hausdorff spaces

Prototypical example for this section: Every space that’s not the Zariski topology (defined much later).

As you might have guessed, there exist topological spaces which cannot be realized as metric spaces (in other words, are not metrizable). One example is just to take X = {a,b,c} and the topology τX = {∅,{a,b,c}}. This topology is fairly “stupid”: it can’t tell apart any of the points a, b, c! But any metric space can tell its points apart (because d(x,y) > 0 when x ≠ y).

We’ll see less trivial examples later, but for now we want to impose a bit more of a sanity condition on our spaces. There is a whole hierarchy of such axioms, labelled Tn for integers n (with n = 0 being the weakest and n = 6 the strongest); these axioms are called separation axioms.

By far the most common hypothesis is the T2 axiom, which bears a special name.

Definition 7.3.1. A topological space X is Hausdorff if for any two distinct points p and q in X, there exists an open neighborhood U of p and an open neighborhood V of q such that

U ∩ V  = ∅.

In other words, around any two distinct points we should be able to draw disjoint open neighborhoods. Here’s a picture to go with the above, though there’s not much going on.

Question 7.3.2. Show that all metric spaces are Hausdorff.

I just want to define this here so that I can use this word later. In any case, basically any space we will encounter other than the Zariski topology is Hausdorff.

7.4  Subspaces

Prototypical example for this section: S¹ is a subspace of ℝ².

One can also take subspaces of general topological spaces.

Definition 7.4.1. Given a topological space X, and a subset S ⊆ X, we can make S into a topological space by declaring that the open subsets of S are the sets U ∩ S for open U ⊆ X. This is called the subspace topology.

So for example, if we view S¹ as a subspace of ℝ², then any open arc is an open set, because you can view it as the intersection of an open disk with S¹.

Needless to say, for metric spaces it doesn’t matter which of these definitions I choose. (Proving this turns out to be surprisingly annoying, so I won’t do so.)

7.5  Connected spaces

Prototypical example for this section: [0,1] ∪ [2,3] is disconnected.

Even in metric spaces, it is possible for a set to be both open and closed.

Definition 7.5.1. A subset S of a topological space X is clopen if it is both closed and open in X. (Equivalently, both S and its complement are open.)

For example, ∅ and the entire space X are examples of clopen sets. In fact, the presence of a nontrivial clopen set other than these two leads to a so-called disconnected space.

Question 7.5.2. Show that a space X has a nontrivial clopen set (one other than and X) if and only if X can be written as a disjoint union of two nonempty open sets.

We say X is disconnected if there are nontrivial clopen sets, and connected otherwise. To see why this should be a reasonable definition, it might help to solve ?? .

Example 7.5.3 (Disconnected and connected spaces)

(a)
The metric space
{ (x,y) | x² + y² ≤ 1 } ∪ { (x,y) | (x − 4)² + y² ≤ 1 } ⊆ ℝ²

is disconnected (it consists of two disks).

(b)
The space [0,1] ∪ [2,3] is disconnected: it consists of two segments, each of which is a clopen set.
(c)
A discrete space on more than one point is disconnected, since every set is clopen in the discrete space.
(d)
Convince yourself that the set

{ x ∈ ℚ : x² < 2014 }

is a clopen subset of ℚ. (The key point is that √2014 is irrational, so the same set is also cut out by the non-strict inequality x² ≤ 2014.) Hence ℚ is disconnected too – it has gaps.

(e)
[0,1] is connected.

7.6  Path-connected spaces

Prototypical example for this section: Walking around in ℂ.

A stronger and perhaps more intuitive notion of a connected space is a path-connected space. The short description: “walk around in the space”.

Definition 7.6.1. A path in the space X is a continuous function

γ : [0,1] → X.

Its endpoints are the two points γ(0) and γ(1).

You can think of [0,1] as measuring “time”, and so we’ll often write γ(t) for t [0,1] (with t standing for “time”). Here’s a picture of a path.

Question 7.6.2. Why does this agree with your intuitive notion of what a “path” is?

Definition 7.6.3. A space X is path-connected if any two points in it are connected by some path.

Exercise 7.6.4 (Path-connected implies connected). Let X = U ⊔ V be a disconnected space. Show that there is no path from a point of U to a point of V. (If γ : [0,1] → X, then we get [0,1] = γpre(U) ⊔ γpre(V), but [0,1] is connected.)

Example 7.6.5 (Examples of path-connected spaces)

(a)
The plane ℝ² is path-connected, since any two of its points can be joined by a straight-line path.
(b)
The punctured plane ℝ² ∖ {0} is also path-connected, since one can simply walk around the missing origin.

7.7  Homotopy and simply connected spaces

Prototypical example for this section: ℂ and ℂ ∖ {0}.

Now let’s motivate the idea of homotopy. Consider the example of the complex plane ℂ (which you can think of just as ℝ²) with two points p and q. There’s a whole bunch of paths from p to q but somehow they’re not very different from one another. If I told you “walk from p to q” you wouldn’t have too many questions.

So we’re living happily in ℂ until a meteor strikes the origin, blowing it out of existence. Then suddenly to get from p to q, people might tell you two different things: “go left around the meteor” or “go right around the meteor”.

So what’s happening? In the first picture, the red, green, and blue paths somehow all looked the same: if you imagine them as pieces of elastic string pinned down at p and q, you can stretch each one to any other one.

But in the second picture, you can’t move the red string to match with the blue string: there’s a meteor in the way. The paths are actually different.3

The formal notion we’ll use to capture this is homotopy equivalence. We want to write a definition such that in the first picture, the three paths are all homotopic, but the two paths in the second picture are somehow not homotopic. And the idea is just continuous deformation.

Definition 7.7.1. Let α and β be paths in X whose endpoints coincide. A (path) homotopy from α to β is a continuous function F : [0,1]² → X, which we’ll write Fs(t) for s, t ∈ [0,1], such that

F0(t) = α (t) and F1(t) = β (t) for all t ∈ [0,1]

and moreover

α (0) = β(0) = Fs(0) and α(1) = β (1) = Fs(1) for all s ∈ [0,1].

If a path homotopy exists, we say α and β are path homotopic and write α ≃ β.

Abuse of Notation 7.7.2. While I strictly should say “path homotopy” to describe this relation between two paths, I will shorten this to just “homotopy” instead. Similarly I will shorten “path homotopic” to “homotopic”.

Animated picture: https://commons.wikimedia.org/wiki/File:HomotopySmall.gif. Needless to say, ≃ is an equivalence relation.

What this definition is doing is taking α and “continuously deforming” it to β, while keeping the endpoints fixed. Note that for each particular s, Fs is itself a function. So s represents time as we deform α to β: it goes from 0 to 1, starting at α and ending at β.
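
To see the definition in action in one easy setting (a sketch; checking the conditions is a nice exercise): if X is a convex subset of ℝⁿ, any two paths α and β with the same endpoints are homotopic via the “straight-line homotopy”

Fs(t) = (1 − s) ⋅ α(t) + s ⋅ β(t),

which is continuous, equals α at s = 0 and β at s = 1, and keeps the endpoints pinned since α and β share them.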

Question 7.7.3. Convince yourself the above definition is right. What goes wrong when the meteor strikes?

So now I can tell you what makes ℂ special:

Definition 7.7.4. A space X is simply connected if it’s path-connected and for any points p and q, all paths from p to q are homotopic.

That’s why you don’t ask questions when walking from p to q in ℂ: there’s really only one way to walk. Hence the term “simply” connected.

Question 7.7.5. Convince yourself that ℝⁿ is simply connected for all n.

7.8  Bases of spaces

Prototypical example for this section: ℝ has a basis of open intervals, and ℝ² has a basis of open disks.

You might have noticed that the open sets of ℝ are a little annoying to describe: the prototypical example of an open set is (0,1), but there are other open sets like

(0,1) ∪ (1, 3∕2) ∪ (2, 7∕3) ∪ (2014, 2015).

Question 7.8.1. Check this is an open set.

But okay, this isn’t that different. All I’ve done is taken a bunch of my prototypes and thrown a bunch of ∪ signs at them. And that’s the idea behind a basis.

Definition 7.8.2. A basis for a topological space X is a subset ℬ of the open sets such that every open set in X is a union of some (possibly infinite) number of elements in ℬ.

And all we’re doing is saying

Example 7.8.3 (Basis of )
The open intervals form a basis of ℝ.

In fact, more generally we have:

Theorem 7.8.4 (Basis of metric spaces)
The r-neighborhoods form a basis of any metric space M.

Proof. Kind of silly – given an open set U, draw for each point p ∈ U an rp-neighborhood Up ∋ p contained entirely inside U. Then ⋃p Up is contained in U and covers every point inside it. □

Hence, an open set in ℝ² is nothing more than a union of a bunch of open disks, and so on. The point is that in a metric space, the only open sets you really ever have to worry too much about are the r-neighborhoods.

7.9  A few harder problems to think about

Problem 7A. Let X be a topological space. Show that there exists a nonconstant continuous function X →{0,1} if and only if X is disconnected (here {0,1} is given the discrete topology).

Problem 7B. Let X and Y be topological spaces and let f : X Y be a continuous function.

(a)
Show that if X is connected then so is fimg(X).
(b)
Show that if X is path-connected then so is fimg(X).

Problem 7C (Hausdorff implies T1 axiom). Let X be a Hausdorff topological space. Prove that for any point p X the set {p} is closed.

Problem 7D ([?], Exercise 2.56). Let M be a metric space with more than one point but at most countably infinitely many points. Show that M is disconnected.

Problem 7E (Furstenberg). We declare a subset of ℤ to be open if it’s the union (possibly empty or infinite) of arithmetic sequences {a + nd | n ∈ ℤ}, where a and d are positive integers.

(a)
Verify this forms a topology on ℤ, called the evenly spaced integer topology.
(b)
Prove there are infinitely many primes by considering ⋃p pℤ for primes p.

Problem 7F. Prove that the evenly spaced integer topology on ℤ is metrizable. In other words, show that one can impose a metric d : ℤ × ℤ → ℝ which makes ℤ into a metric space whose open sets are those described above.

Problem 7G. We know that any open set U ⊆ ℝ is a union of open intervals (allowing ±∞ as endpoints). One can show that it’s actually possible to write U as the union of pairwise disjoint open intervals.4 Prove that there exists such a disjoint union with at most countably many intervals in it.

8  Compactness

One of the most important notions of topological spaces is that of compactness. It generalizes the notion of “closed and bounded” in Euclidean space to any topological space (e.g. see ?? ).

For metric spaces, there are two equivalent ways of formulating compactness:

(a)
every sequence has a convergent subsequence (“sequentially compact”), and
(b)
every open cover has a finite subcover.

As I alluded to earlier, sequences in metric spaces are super nice, but sequences in general topological spaces suck (to the point where I didn’t bother to define convergence of general sequences). So it’s the second definition that will be used for general spaces.

8.1  Definition of sequential compactness

Prototypical example for this section: [0,1] is compact, but (0,1) is not.

To emphasize, compactness is one of the best possible properties that a metric space can have.

Definition 8.1.1. A subsequence of an infinite sequence x1, x2, … is exactly what it sounds like: a sequence xi1, xi2, … where i1 < i2 < ⋯ are positive integers. Note that the sequence is required to be infinite.

Another way to think about this is “selecting infinitely many terms” or “deleting some terms” of the sequence, depending on whether your glass is half empty or half full.

Definition 8.1.2. A metric space M is sequentially compact if every sequence has a subsequence which converges.

This time, let me give some non-examples before the examples.

Example 8.1.3 (Non-examples of compact metric spaces)

(a)
The space ℝ is not compact: consider the sequence 1, 2, 3, 4, …. Any subsequence explodes, hence ℝ cannot possibly be compact.
(b)
More generally, if a space is not bounded it cannot be compact. (You can prove this if you want.)
(c)
The open interval (0,1) is bounded but not compact: consider the sequence 1∕2, 1∕3, 1∕4, …. No subsequence can converge to a point in (0,1) because the sequence “converges to 0”.
(d)
More generally, any space which is not complete cannot be compact.

Now for the examples!

Question 8.1.4. Show that a finite set is compact. (Pigeonhole Principle.)

Example 8.1.5 (Examples of compact spaces)
Here are some more examples of compact spaces. I’ll prove they’re compact in just a moment; for now just convince yourself they are.

(a)
[0,1] is compact. Convince yourself of this! Imagine having a large number of dots in the unit interval…
(b)
The surface of a sphere, S² = { (x,y,z) | x² + y² + z² = 1 }, is compact.
(c)
The unit ball B² = { (x,y) | x² + y² ≤ 1 } is compact.
(d)
The Hawaiian earring living in ℝ² is compact: it consists of mutually tangent circles of radius 1∕n for each n, as in ?? .

Figure 8.1: Hawaiian Earring.

To aid in generating more examples, we remark:

Proposition 8.1.6 (Closed subsets of compacts)
Closed subsets of sequentially compact spaces are sequentially compact.

Question 8.1.7. Prove this. (It should follow easily from definitions.)

We need to do a bit more work for these examples, which we do in the next section.

8.2  Criteria for compactness

Theorem 8.2.1 (Tychonoff’s theorem)
If X and Y are compact spaces, then so is X ×Y .

Proof. ?? . □

We also have:

Theorem 8.2.2 (The interval is compact)
[0,1] is compact.

Proof. Killed by ?? ; however, here is a sketch of a direct proof. Split [0,1] into [0, 1∕2] ∪ [1∕2, 1]. By Pigeonhole, infinitely many terms of the sequence lie in the left half (say); let x1 be the first one and then keep only the terms in the left half after x1. Now split [0, 1∕2] into [0, 1∕4] ∪ [1∕4, 1∕2]. Again, by Pigeonhole, infinitely many terms fall in some half; pick one of them, call it x2. Rinse and repeat. In this way we generate a sequence x1, x2, … which is Cauchy (the terms from xN onwards all lie in an interval of length 2^{−N}), implying that it converges since [0,1] is complete. □

Now we can prove the main theorem about Euclidean space: in n, compactness is equivalent to being “closed and bounded”.

Theorem 8.2.3 (Bolzano-Weierstraß)
A subset of ℝⁿ is compact if and only if it is closed and bounded.

Question 8.2.4. Why does this imply the spaces in our examples are compact?

Proof. Well, look at a closed and bounded S ⊆ ℝⁿ. Since it’s bounded, it lives inside some box [a1,b1] × [a2,b2] × ⋯ × [an,bn]. By Tychonoff’s theorem, since each [ai,bi] is compact the entire box is. Since S is a closed subset of this compact box, we’re done. □

One really has to work in ℝⁿ for this to be true! In other spaces, this criterion can easily fail.

Example 8.2.5 (Closed and bounded but not compact)
Let S = {s1, s2, …} be any infinite set equipped with the discrete metric. Then S is closed (since all convergent sequences are eventually constant) and S is bounded (all points are a distance 1 from each other) but it’s certainly not compact, since the sequence s1, s2, … has no convergent subsequence.

The Bolzano-Weierstrass theorem, which is ?? , tells you exactly which sets are compact in metric spaces in a geometric way.

8.3  Compactness using open covers

Prototypical example for this section: [0,1] is compact.

There’s a second related notion of compactness which I’ll now define. The following definitions might appear very unmotivated, but bear with me.

Definition 8.3.1. An open cover of a topological space X is a collection of open sets {Uα} (possibly infinite or uncountable) which cover it: every point in X lies in at least one of the Uα, so that

X = ⋃α Uα.

A subcover is exactly what it sounds like: it takes only some of the Uα, while ensuring that X remains covered.

Some art:

Definition 8.3.2. A topological space X is quasicompact if every open cover has a finite subcover. It is compact if it is also Hausdorff.

Remark 8.3.3 — The “Hausdorff” hypothesis that I snuck in is a sanity condition which is not worth worrying about unless you’re working on the algebraic geometry chapters, since all the spaces you will deal with are Hausdorff. (In fact, some authors don’t even bother to include it.) For example all metric spaces are Hausdorff and thus this condition can be safely ignored if you are working with metric spaces.

What does this mean? Here’s an example:

Example 8.3.4 (Example of a finite subcover)
Suppose we cover the unit square M = [0,1]² by putting an open disk of diameter 1 centered at every point (trimming any overflow). This is clearly an open cover because, well, every point lies in many of the open sets, and in particular is the center of one.

But this is way overkill – we only need about four of these circles to cover the whole square. That’s what is meant by a “finite subcover”.

Why do we care? Because of this:

Theorem 8.3.5 (Sequentially compact ⟺ compact)
A metric space M is sequentially compact if and only if it is compact.

We defer the proof to the last section.

This gives us the motivation we wanted for our definition. Sequential compactness was a condition that made sense. The open-cover definition looked strange, but it turned out to be equivalent. But we now prefer it, because we have seen that whenever possible we want to resort to open-set-only based definitions: so that e.g. they are preserved under homeomorphism.

Example 8.3.6 (An example of non-compactness)
The space X = [0,1) is not compact in either sense. We can already see it is not sequentially compact, because it is not even complete (look at xn = 1 − 1∕n). To see it is not compact under the covering definition, consider the sets

Um = [0, 1 − 1∕(m+1))

for m = 1, 2, …. Then X = ⋃m Um; hence the Um are indeed a cover. But no finite collection of the Um’s will cover X.

Question 8.3.7. Convince yourself that [0,1] is compact; this is a little less intuitive than it being sequentially compact.

Abuse of Notation 8.3.8. Thus, we’ll never call a metric space “sequentially compact” again — we’ll just say “compact”. (Indeed, I kind of already did this in the previous few sections.)

8.4  Applications of compactness

Compactness lets us reduce infinite open covers to finite ones. Actually, it lets us do this even if the open covers are blithely stupid. Very often one takes an open cover consisting of an open neighborhood of x X for every single point x in the space; this is a huge number of open sets, and yet compactness lets us reduce to a finite set.

To give an example of a typical usage:

Proposition 8.4.1 (Compact =⇒ totally bounded)
Let M be compact. Then M is totally bounded.

Proof using covers. For every point p M, take an 𝜀-neighborhood of p, say Up. These cover M for the horrendously stupid reason that each point p is at the very least covered by its open neighborhood Up. Compactness then lets us take a finite subcover. □

Next, an important result about maps between compact spaces.

Theorem 8.4.2 (Images of compacts are compact)
Let f : X → Y be a continuous function, where X is compact. Then the image

fimg(X) ⊆ Y

is compact.

Proof using covers. Take any open cover {Vα} in Y of fimg(X). By continuity of f, it pulls back to an open cover {Uα} of X. Thus some finite subcover of this covers X. The corresponding Vα’s cover fimg(X). □

Question 8.4.3. Give another proof using the sequential definitions of continuity and compactness. (This is even easier.)

Some nice corollaries of this:

Corollary 8.4.4 (Extreme value theorem)
Let X be compact and consider a continuous function f : X . Then f achieves a maximum value at some point, i.e. there is a point p X such that f(p) f(q) for any other q X.

Corollary 8.4.5 (Intermediate value theorem)
Consider a continuous function f : [0,1] . Then the image of f is of the form [a,b] for some real numbers a b.

Sketch of Proof. The point is that the image of f is compact in ℝ, and hence closed and bounded. You can convince yourself that the closed sets are just unions of closed intervals. That implies the extreme value theorem.

When X = [0,1], the image is also connected, so there should only be one closed interval in fimg([0,1]). Since the image is bounded, we then know it’s of the form [a,b]. (To give a full proof, you would use the so-called least upper bound property, but that’s a little involved for a bedtime story; also, I think ℝ is boring.) □

Example 8.4.6 (1∕x)
The compactness hypothesis is really important here. Otherwise, consider the function

(0,1) → ℝ by x ↦ 1∕x.

This function (which you plot as a hyperbola) is not bounded; essentially, you can see graphically that the issue is we can’t extend it to a function on [0,1] because it explodes near x = 0.

One last application: if M is a compact metric space, then continuous functions f : M N are continuous in an especially “nice” way:

Definition 8.4.7. A function f : M → N of metric spaces is called uniformly continuous if for any 𝜀 > 0, there exists a δ > 0 (depending only on 𝜀) such that whenever dM(x,y) < δ we also have dN(f(x), f(y)) < 𝜀.

The name means that for each 𝜀 > 0, we need a δ that works for every point of M.

Example 8.4.8 (Uniform continuity)

(a)
The functions ℝ → ℝ of the form x ↦ ax + b are all uniformly continuous, since one can always take δ = 𝜀∕|a| (or δ = 1 if a = 0).
(b)
Actually, it is true that a differentiable function with a bounded derivative is uniformly continuous. (The converse is false for the reason that uniformly continuous doesn’t imply differentiable at all.)
(c)
The function f : ℝ → ℝ by x ↦ x² is not uniformly continuous, since for large x, tiny changes δ to x lead to fairly large changes in x². (If you like, you can try to prove this formally now.)

Think f(2017.01) − f(2017) > 40; even when δ = 0.01, one can still cause large changes in f.

(d)
However, when restricted to (0,1) or [0,1] the function x ↦ x² becomes uniformly continuous. (For 𝜀 > 0 one can now pick for example δ = min{1, 𝜀}∕3.)
(e)
The function (0,1) → ℝ by x ↦ 1∕x is not uniformly continuous (same reason as before).

Now, as promised:

Proposition 8.4.9 (Continuous on compact =⇒ uniformly continuous)
If M is compact and f : M N is continuous, then f is uniformly continuous.

Proof using sequences. Fix 𝜀 > 0, and assume for contradiction that for every δ = 1∕k there exist points xk and yk within δ of each other but with images 𝜀 > 0 apart. By compactness, take a convergent subsequence xik → p. Then yik → p as well, since the xk’s and yk’s are close to each other. So both sequences f(xik) and f(yik) should converge to f(p) by sequential continuity, but this can’t be true since the two sequences are always 𝜀 apart. □

8.5  (Optional) Equivalence of formulations of compactness

We will prove that:

Theorem 8.5.1 (Heine-Borel for general metric spaces)
For a metric space M, the following are equivalent:

(i)
Every sequence has a convergent subsequence,
(ii)
The space M is complete and totally bounded, and
(iii)
Every open cover has a finite subcover.

We leave the proof that (i) ⟺ (ii) as ?? ; the idea of the proof is much in the spirit of ?? .

Proof that (i) and (ii) =⇒ (iii). We prove the following lemma, which is interesting in its own right.

Lemma 8.5.2 (Lebesgue number lemma)
Let M be a compact metric space and {Uα} an open cover. Then there exists a real number δ > 0, called a Lebesgue number for that covering, such that the δ-neighborhood of any point p lies entirely in some Uα.

Proof of lemma. Assume for contradiction that for every δ = 1∕k there is a point xk ∈ M whose 1∕k-neighborhood isn’t contained in any Uα. In this way we construct a sequence x1, x2, …; thus we’re allowed to take a subsequence which converges to some x. Then for every 𝜀 > 0 we can find an integer n with d(xn, x) + 1∕n < 𝜀, so the 𝜀-neighborhood of x contains the 1∕n-neighborhood of xn and hence isn’t contained in any Uα. This holds for every 𝜀 > 0, which is impossible, because we assumed x was covered by some open set. □

Now, take a Lebesgue number δ for the covering. Since M is totally bounded, finitely many δ-neighborhoods cover the space, so finitely many Uα do as well. □

Proof that (iii) =⇒ (ii). One step is immediate:

Question 8.5.3. Show that the covering condition =⇒ totally bounded.

The tricky part is showing M is complete. Assume for contradiction it isn’t; thus there is some Cauchy sequence (xk) which doesn’t converge to any particular point.

Question 8.5.4. Show that this implies for each p M, there is an 𝜀p-neighborhood Up which contains at most finitely many of the points of the sequence (xk). (You will have to use the fact that xk↛p and (xk) is Cauchy.)

Now if we consider M = ⋃p Up we get a finite subcover of these open neighborhoods; but this finite subcover can only cover finitely many points of the sequence, contradiction. □

8.6  A few harder problems to think about

The later problems are pretty hard; some have the flavor of IMO 3/6-style constructions. It’s important to draw lots of pictures so one can tell what’s happening. Of these ??  is definitely my favorite.

Problem 8A. Show that the closed interval [0,1] and open interval (0,1) are not homeomorphic.

Problem 8B. Let X be a topological space with the discrete topology. Under what conditions is X compact?

Problem 8C (The cofinite topology is quasicompact only). We let X be an infinite set and equip it with the cofinite topology: the open sets are the empty set and complements of finite sets. This makes X into a topological space. Show that X is quasicompact but not Hausdorff.

Problem 8D (Cantor’s intersection theorem). Let X be a compact topological space, and suppose

X = K0 ⊇ K1 ⊇ K2 ⊇ ⋯

is an infinite sequence of nested nonempty closed subsets. Show that ⋂_{n≥0} Kn ≠ ∅.

Problem 8E (Tychonoff’s theorem). Let X and Y be compact metric spaces. Show that X × Y is compact. (This is also true for general topological spaces, but the proof is surprisingly hard, and we haven’t even defined X × Y in general yet.)

Problem 8F (Bolzano-Weierstraß theorem for general metric spaces). Prove that a metric space M is sequentially compact if and only if it is complete and totally bounded.

Problem 8G (Almost Arzelà-Ascoli theorem). Let f1, f2, … : [0,1] → [−100, 100] be an equicontinuous sequence of functions, meaning

∀𝜀 > 0 ∃δ > 0 ∀n ∀x,y (|x − y| < δ =⇒ |fn(x) − fn(y)| < 𝜀)

Show that we can extract a subsequence fi1, fi2, … of these functions such that for every x ∈ [0,1], the sequence fi1(x), fi2(x), … converges.

Problem 8H. Let M = (M,d) be a bounded metric space. Suppose that whenever d′ is another metric on M for which (M,d) and (M,d′) are homeomorphic (i.e. have the same open sets), then d′ is also bounded. Prove that M is compact.

Problem 8I. In this problem a “circle” refers to the boundary of a disk with nonzero radius.

(a)
Is it possible to partition the plane ℝ² into disjoint circles?
(b)
From the plane ℝ² we delete two distinct points p and q. Is it possible to partition the remaining points into disjoint circles?

Part IV
Linear Algebra

9  Vector spaces

This is a pretty light chapter. The point of it is to define what a vector space and a basis are. These are intuitive concepts that you may already know.

9.1  The definitions of a ring and field

Prototypical example for this section: ℤ, ℚ, and ℝ are rings; the latter two are fields.

I’ll very informally define a ring/field here, in case you skipped the earlier chapter.

In fact, if you replace “field” by “ℝ” everywhere in what follows, you probably won’t lose much. It’s customary to use the letter R for rings, and k or K for fields.

Finally, in case you skipped the chapter on groups, I should also mention: an abelian group is a structure in which one can add and subtract, but where there need not be any multiplication.

9.2  Modules and vector spaces

Prototypical example for this section: Polynomials of degree at most n.

You intuitively know already that ℝⁿ is a “vector space”: its elements can be added together, and there’s some scaling by real numbers. Let’s develop this more generally.

Fix a commutative ring R. Then informally,

An R-module is any structure where you can add two elements and scale by elements of R.

Moreover, a vector space is just a module whose commanding ring is actually a field. I’ll give you the full definition in a moment, but first, examples…

Example 9.2.1 (Quadratic polynomials, aka my favorite example)
My favorite example of an ℝ-vector space is the set of polynomials of degree at most two, namely

{ax² + bx + c | a,b,c ∈ ℝ}.

Indeed, you can add any two quadratics, and multiply by constants. You can’t multiply two quadratics to get a quadratic, but that’s irrelevant – in a vector space there need not be a notion of multiplying two vectors together.

In a sense we’ll define later, this vector space has dimension 3 (as expected!).

Example 9.2.2 (All polynomials)
The set of all polynomials with real coefficients is an ℝ-vector space, because you can add any two polynomials and scale by constants.

Example 9.2.3 (Euclidean space)

(a)
The complex numbers

{a + bi | a,b ∈ ℝ}

form a real vector space. As we’ll see later, it has “dimension 2”.

(b)
The real numbers form a real vector space of dimension 1.
(c)
The set of 3D vectors

{(x,y,z) | x,y,z ∈ ℝ}

forms a real vector space, because you can add any two triples component-wise. Again, we’ll later explain why it has “dimension 3”.

Example 9.2.4 (More examples of vector spaces)

(a)
The set

ℚ[√2] = {a + b√2 | a,b ∈ ℚ}

has the structure of a ℚ-vector space in the obvious fashion: one can add any two elements, and scale by rational numbers. (It is not a real vector space – why?)

(b)
The set
{(x,y,z) | x+ y + z = 0 and x,y,z ∈ ℝ}

is a 2-dimensional real vector space.

(c)
The set of all functions f : ℝ → ℝ is also a real vector space (since the notions f + g and c⋅f both make sense for c ∈ ℝ).

Now let me write the actual rules for how this multiplication behaves.

Definition 9.2.5. Let R be a commutative ring. An R-module starts with an additive abelian group M = (M,+) whose identity is denoted 0 = 0_M. We additionally specify a left multiplication by elements of R. This multiplication must satisfy the following properties for r, r₁, r₂ ∈ R and m, m₁, m₂ ∈ M:

(i)
r₁ ⋅ (r₂ ⋅ m) = (r₁r₂) ⋅ m.
(ii)
Multiplication is distributive, meaning
(r₁ + r₂) ⋅ m = r₁ ⋅ m + r₂ ⋅ m  and  r ⋅ (m₁ + m₂) = r ⋅ m₁ + r ⋅ m₂.
(iii)
1_R ⋅ m = m.
(iv)
0_R ⋅ m = 0_M. (This is actually extraneous; one can deduce it from the first three.)

If R is a field we say M is an R-vector space; its elements are called vectors and the members of R are called scalars.

Abuse of Notation 9.2.6. In the above, we’re using the same symbol + for the addition of M and the addition of R. Sorry about that, but it’s kind of hard to avoid, and the point of the axioms is that these additions should be related. I’ll try to remember to put r ⋅ m for the multiplication of the module and r₁r₂ for the multiplication of R.

Question 9.2.7. In ?? , I was careful to say “degree at most 2” instead of “degree 2”. What’s the reason for this? In other words, why is

{ax² + bx + c | a,b,c ∈ ℝ, a ≠ 0}

not an ℝ-vector space?

A couple less intuitive but somewhat important examples…

Example 9.2.8 (Abelian groups are ℤ-modules)
(Skip this example if you’re not comfortable with groups.)

(a)
The example of real polynomials

{ax² + bx + c | a,b,c ∈ ℝ}

is also a ℤ-module! Indeed, we can add any two such polynomials, and we can scale them by integers.

(b)
The set of integers modulo 100, say ℤ/100ℤ, is a ℤ-module as well. Can you see how?
(c)
In fact, any abelian group G = (G,+) is a ℤ-module. The multiplication can be defined by

n ⋅ g = g + ⋯ + g  (n times),  and  (−n) ⋅ g = n ⋅ (−g)

for n ≥ 0. (Here −g is the additive inverse of g.)

Example 9.2.9 (Every ring is its own module)

(a)
ℝ can be thought of as an ℝ-vector space over itself. Can you see why?
(b)
By the same reasoning, we see that any commutative ring R can be thought of as an R-module over itself.

9.3  Direct sums

Prototypical example for this section: {ax² + bx + c} = x²ℝ ⊕ xℝ ⊕ ℝ, and ℝ³ is the sum of its axes.

Let’s return to ?? , and consider

V = {ax² + bx + c | a,b,c ∈ ℝ}.

Even though I haven’t told you what a dimension is, you can probably see that this vector space “should have” dimension 3. We’ll get to that in a moment.

The other thing you may have noticed is that somehow the x2, x and 1 terms don’t “talk to each other”. They’re totally unrelated. In other words, we can consider the three sets

x²ℝ := {ax² | a ∈ ℝ}
xℝ := {bx | b ∈ ℝ}
ℝ := {c | c ∈ ℝ}.

In an obvious way, each of these can be thought of as a “copy” of ℝ.

Then V quite literally consists of the “sums of these sets”. Specifically, every element of V can be written uniquely as the sum of one element from each of these sets. This motivates us to write

V = x²ℝ ⊕ xℝ ⊕ ℝ.

The notion which captures this formally is the direct sum.

Definition 9.3.1. Let M be an R-module. Let M₁ and M₂ be subsets of M which are themselves R-modules. Then we write M = M₁ ⊕ M₂ and say M is a direct sum of M₁ and M₂ if every element of M can be written uniquely as the sum of an element of M₁ and an element of M₂.

Example 9.3.2 (Euclidean plane)
Take the vector space ℝ² = {(x,y) | x ∈ ℝ, y ∈ ℝ}. We can consider it as a direct sum of its x-axis and y-axis:

X = {(x,0) | x ∈ ℝ} and Y = {(0,y) | y ∈ ℝ}.

Then ℝ² = X ⊕ Y.

This gives us a “top-down” way to break down modules into some disconnected components.

By applying this idea in reverse, we can also construct new vector spaces as follows. In a very unfortunate accident, the two names and notations for technically distinct things are exactly the same.

Definition 9.3.3. Let M and N be R-modules. We define the direct sum M ⊕ N to be the R-module whose elements are pairs (m,n) ∈ M × N. The operations are given by

(m₁, n₁) + (m₂, n₂) = (m₁ + m₂, n₁ + n₂)

and

r ⋅ (m,n) = (r ⋅ m, r ⋅ n).

For example, while we technically wrote ℝ² = X ⊕ Y, since each of X and Y is a copy of ℝ, we might as well have written ℝ² ≅ ℝ ⊕ ℝ.

Abuse of Notation 9.3.4. The above illustrates an abuse of notation in the way we write a direct sum. The symbol ⊕ has two meanings.

  • If M₁ and M₂ are subsets of a single module M, then M = M₁ ⊕ M₂ asserts that M decomposes as the direct sum of M₁ and M₂.
  • If M and N are two unrelated modules, then M ⊕ N is the newly constructed module of pairs (m,n).

You can see that these definitions “kind of” coincide.

In this way, you can see that V should be isomorphic to ℝ ⊕ ℝ ⊕ ℝ; we had V = x²ℝ ⊕ xℝ ⊕ ℝ, but the 1, x, x² don’t really talk to each other and each of the summands is really just a copy of ℝ at heart.

Definition 9.3.5. We can also define, for every positive integer n, the module

M^{⊕n} := M ⊕ M ⊕ ⋯ ⊕ M  (n times).

9.4  Linear independence, spans, and basis

Prototypical example for this section: {1, x, x²} is a basis of {ax² + bx + c | a,b,c ∈ ℝ}.

The idea of a basis, the topic of this section, gives us another way to capture the notion that

V = {ax² + bx + c | a,b,c ∈ ℝ}

consists of sums of copies of the elements 1, x, x². This section should be very intuitive, if technical. If you can’t see why the theorems here “should” be true, you’re doing it wrong.

Let M be an R-module now. We define three very classical notions that you likely are already familiar with. If not, fall back on your intuition for Euclidean space or the V above.

Definition 9.4.1. A linear combination of some vectors v₁, …, vₙ is a sum of the form r₁v₁ + ⋯ + rₙvₙ, where r₁, …, rₙ ∈ R. The linear combination is called trivial if r₁ = r₂ = ⋯ = rₙ = 0_R, and nontrivial otherwise.

Definition 9.4.2. Consider a finite set of vectors v₁, …, vₙ in a module M.

  • It is called linearly independent if the only linear combination r₁v₁ + ⋯ + rₙvₙ equal to 0_M is the trivial one.
  • It is called spanning if every element of M can be written as a linear combination of the vᵢ.
  • It is called a basis if every element of M can be written uniquely as a linear combination of the vᵢ.

The same definitions apply for an infinite set, with the proviso that all sums must be finite.

So by definition, {1, x, x²} is a basis for V. It’s not the only one: {2, x, x²} and {x + 4, x − 2, x² + x} are other examples of bases, though not as natural. However, the set S = {3 + x², x + 1, 5 + 2x + x²} is not a basis; it fails for two reasons:

  • It is not linearly independent: the nontrivial combination (3 + x²) + 2(x + 1) − (5 + 2x + x²) = 0 vanishes.
  • It is not spanning: for example, one cannot obtain x² as a linear combination of the elements of S.

With these new terms, we can say a basis is a linearly independent and spanning set.

Example 9.4.3 (More examples of bases)

(a)
Regard ℚ[√2] = {a + b√2 | a,b ∈ ℚ} as a ℚ-vector space. Then {1, √2} is a basis.
(b)
If V is the set of all real polynomials, there is an infinite basis {1, x, x², …}. The condition that we only use finitely many terms just says that the polynomials must have finite degree (which is good).
(c)
Let V = {(x,y,z) | x + y + z = 0 and x,y,z ∈ ℝ}. Then we expect there to be a basis of size 2, but unlike previous examples there is no immediately “obvious” choice. Some working examples include:
  • (1,−1,0) and (1,0,−1),
  • (0,1,−1) and (1,0,−1),
  • (5,3,−8) and (2,−1,−1).

Exercise 9.4.4. Show that a set of vectors is a basis if and only if it is linearly independent and spanning. (Think about the polynomial example if you get stuck.)
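If you want to check candidate bases like these numerically, here is a small sketch in Python with numpy (the library call and the setup are my own additions, not part of the text): a pair of vectors in the plane x + y + z = 0 is a basis of that plane exactly when the 2 × 3 matrix they form has rank 2.

    import numpy as np

    # Candidate bases for {(x, y, z) : x + y + z = 0} from Example 9.4.3(c).
    pairs = [
        [(1, -1, 0), (1, 0, -1)],
        [(0, 1, -1), (1, 0, -1)],
        [(5, 3, -8), (2, -1, -1)],
    ]
    for u, v in pairs:
        assert sum(u) == 0 and sum(v) == 0              # both lie in the plane
        print(np.linalg.matrix_rank(np.array([u, v])))  # 2: linearly independent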

Now we state a few results which assert that bases in vector spaces behave as nicely as possible.

Theorem 9.4.5 (Maximality and minimality of bases)
Let V be a vector space over some field k and take e₁, …, eₙ ∈ V. The following are equivalent:

(a)
The ei form a basis.
(b)
The ei are spanning, but no proper subset is spanning.
(c)
The ei are linearly independent, but adding any other element of V makes them not linearly independent.

Remark 9.4.6 — If we replace V by a general module M over a commutative ring R, then (a) ⟹ (b) and (a) ⟹ (c) but not conversely.

Proof. Straightforward, do it yourself if you like. The key point to notice is that you need to divide by scalars for the converse direction, hence V is required to be a vector space instead of just a module for the implications (b) ⟹ (a) and (c) ⟹ (a). □

Theorem 9.4.7 (Dimension theorem for vector spaces)
If a vector space V has a finite basis, then every other basis has the same number of elements.

Proof. We prove something stronger: Assume v₁, …, vₙ is a spanning set while w₁, …, wₘ is linearly independent. We claim that n ≥ m.

Question 9.4.8. Show that this claim is enough to imply the theorem.

Let A₀ = {v₁, …, vₙ} be the spanning set. Throw in w₁: by the spanning condition, w₁ = c₁v₁ + ⋯ + cₙvₙ. There’s some nonzero coefficient, say cₙ. Thus

vₙ = (1/cₙ)w₁ − (c₁/cₙ)v₁ − (c₂/cₙ)v₂ − ⋯.

Thus A₁ = {v₁, …, vₙ₋₁, w₁} is spanning. Now do the same thing, throwing in w₂, and deleting some element of the vᵢ as before to get A₂; the condition that the wᵢ are linearly independent ensures that some vᵢ coefficient must always not be zero. Since we can eventually get to Aₘ, we have n ≥ m. □

Remark 9.4.9 (Generalizations)

The dimension theorem, true to its name, lets us define the dimension of a vector space as the size of any finite basis, if one exists. When it does exist we say V is finite-dimensional. So for example,

V = {ax² + bx + c | a,b,c ∈ ℝ}

has dimension three, because {1, x, x²} is a basis. That’s not the only basis: we could as well have written

{a(x² − 4x) + b(x + 2) + c | a,b,c ∈ ℝ}

and gotten the exact same vector space. But the beauty of the theorem is that no matter how we try to contrive the generating set, we always will get exactly three elements. That’s why it makes sense to say V has dimension three.

On the other hand, the set of all polynomials ℝ[x] is infinite-dimensional (which should be intuitively clear).

A basis e₁, …, eₙ of V is really cool because it means that to specify v ∈ V, I only have to specify a₁, …, aₙ ∈ k, and then let v = a₁e₁ + ⋯ + aₙeₙ. You can even think of v as (a₁, …, aₙ). To put it another way, if V is a k-vector space we always have

V = e₁k ⊕ e₂k ⊕ ⋯ ⊕ eₙk.

9.5  Linear maps

Prototypical example for this section: Evaluation of {ax² + bx + c} at x = 3.

We’ve seen homomorphisms and continuous maps. Now we’re about to see linear maps, the structure-preserving maps between vector spaces. Can you guess the definition?

Definition 9.5.1. Let V and W be vector spaces over the same field k. A linear map is a map T : V W such that:

(i)
We have T(v₁ + v₂) = T(v₁) + T(v₂) for any v₁, v₂ ∈ V.
(ii)
For any a ∈ k and v ∈ V, T(a ⋅ v) = a ⋅ T(v).

If this map is a bijection (equivalently, if it has an inverse), it is an isomorphism. We then say V and W are isomorphic vector spaces and write V ≅ W.

Example 9.5.2 (Examples of linear maps)

(a)
For any vector spaces V and W there is a trivial linear map sending everything to 0_W ∈ W.
(b)
For any vector space V , there is the identity isomorphism id : V V .
(c)
The map ℝ³ → ℝ by (a,b,c) ↦ 4a + 2b + c is a linear map.
(d)
Let V be the set of real polynomials of degree at most 2. The map ℝ³ → V by (a,b,c) ↦ ax² + bx + c is an isomorphism.
(e)
Let V be the set of real polynomials of degree at most 2. The map V → ℝ by ax² + bx + c ↦ 9a + 3b + c is a linear map, which can be described as “evaluation at 3”.
(f)
Let W be the set of functions ℝ → ℝ. The evaluation map W → ℝ by f ↦ f(0) is a linear map.
(g)
There is a map of ℚ-vector spaces ℚ[√2] → ℚ[√2] called “multiply by √2”; this map sends a + b√2 ↦ 2b + a√2. This map is an isomorphism, because it has an inverse “multiply by 1/√2”.

In the expression T(a ⋅ v) = a ⋅ T(v), note that the first ⋅ is the multiplication of V and the second ⋅ is the multiplication of W. Note that this notion of isomorphism really only cares about the size of the basis:

Proposition 9.5.3 (n-dimensional vector spaces are isomorphic)
If V is an n-dimensional vector space, then V ≅ kⁿ.

Question 9.5.4. Let e1, …, en be a basis for V . What is the isomorphism? (Your first guess is probably right.)

Remark 9.5.5 — You could technically say that all finite-dimensional vector spaces are just kⁿ and that no other space is worth caring about. But this seems kind of rude. Spaces often are more than just triples: ax² + bx + c is a polynomial, and so it has some “essence” to it that you’d lose if you compressed it into (a,b,c).

Moreover, a lot of spaces, like the set of vectors (x,y,z) with x + y + z = 0, do not have an obvious choice of basis. Thus to cast such a space into kn would require you to make arbitrary decisions.

9.6  What is a matrix?

Now I get to tell you what a matrix is: it’s a way of writing a linear map in terms of bases.

Suppose we have a finite-dimensional vector space V with basis e₁, …, eₘ and a vector space W with basis w₁, …, wₙ. I also have a map T : V → W and I want to tell you what T is. It would be awfully inconsiderate of me to try and tell you what T(v) is at every point v. In fact, I only have to tell you what T(e₁), …, T(eₘ) are, because from there you can work out T(a₁e₁ + ⋯ + aₘeₘ) for yourself:

T(a₁e₁ + ⋯ + aₘeₘ) = a₁T(e₁) + ⋯ + aₘT(eₘ).

Since the ei are a basis, that tells you all you need to know about T.

Example 9.6.1 (Extending linear maps)
Let V = {ax² + bx + c | a,b,c ∈ ℝ}. Then any linear map T out of V is determined by its values on the basis, since T(ax² + bx + c) = aT(x²) + bT(x) + cT(1).

Now I can even be more concrete. I could tell you what T(e₁) is, but seeing as I have a basis of W, I can actually just tell you what T(e₁) is in terms of this basis. Specifically, there are unique a₁₁, a₂₁, …, aₙ₁ ∈ k such that

T(e₁) = a₁₁w₁ + a₂₁w₂ + ⋯ + aₙ₁wₙ.

So rather than telling you the value of T(e₁) in some abstract space W, I could just tell you what a₁₁, a₂₁, …, aₙ₁ were. Then I’d repeat this for T(e₂), T(e₃), all the way up to T(eₘ), and that would tell you everything you need to know about T.

That’s where the matrix T comes from! It’s a concise way of writing down all mn numbers I need to tell you. To be explicit, the matrix for T is defined as the array

T = \begin{bmatrix} \mid & \mid & & \mid \\ T(e_1) & T(e_2) & \cdots & T(e_m) \\ \mid & \mid & & \mid \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1m} \\ a_{21} & a_{22} & \cdots & a_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nm} \end{bmatrix}

which has n rows and m columns.

Example 9.6.2 (An example of a matrix)
Here is a concrete example in terms of a basis. Let V = ℝ³ with basis e₁, e₂, e₃ and let W = ℝ² with basis w₁, w₂. A map T : V → W is then uniquely determined by three values, for example:

T(e₁) = 4w₁ + 7w₂
T(e₂) = 2w₁ + 3w₂
T(e₃) = w₁

The columns then correspond to T(e₁), T(e₂), T(e₃):

T = \begin{bmatrix} 4 & 2 & 1 \\ 7 & 3 & 0 \end{bmatrix}

Example 9.6.3 (An example of a matrix after choosing a basis)
We again let V = {ax2 + bx + c} be the vector space of polynomials of degree at most 2. We fix the basis 1, x, x2 for it.

Consider the “evaluation at 3” map, a map V → ℝ. We pick 1 as the basis element of the RHS; then we can write it as a 1 × 3 matrix

\begin{bmatrix} 1 & 3 & 9 \end{bmatrix}

with the columns corresponding to T(1), T(x), T(x²).

From here you can actually work out for yourself what it means to multiply two matrices. Suppose we have picked a basis for three spaces U, V, W. Given maps T : U → V and S : V → W, we can consider their composition S ∘ T, i.e.

U \xrightarrow{T} V \xrightarrow{S} W.

Matrix multiplication is defined exactly so that the matrix ST is the same thing we get from interpreting the composed function S ∘ T as a matrix.

Exercise 9.6.4. Check this for yourself! For a concrete example let T, S : ℝ² → ℝ² by T(e₁) = 2e₁ + 3e₂ and T(e₂) = 4e₁ + 5e₂, S(e₁) = 6e₁ + 7e₂ and S(e₂) = 8e₁ + 9e₂. Compute S(T(e₁)) and S(T(e₂)) and see how it compares to multiplying the matrices associated to S and T.
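A quick numerical sanity check of this exercise, as a sketch in Python with numpy (numpy is my own assumption here, not part of the text):

    import numpy as np

    # Columns are images of basis vectors: T(e1) = 2e1 + 3e2, T(e2) = 4e1 + 5e2.
    T = np.array([[2, 4],
                  [3, 5]])
    S = np.array([[6, 8],
                  [7, 9]])

    e1, e2 = np.array([1, 0]), np.array([0, 1])

    print(S @ (T @ e1), S @ (T @ e2))  # composition: [36 41] [64 73]
    print(S @ T)                       # matrix product has the same columns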

In particular, since function composition is associative, it follows that matrix multiplication is as well. To drive this point home,

A matrix is the laziest possible way to specify a linear map from V to W.

This means you can define concepts like the determinant or the trace of a matrix both in terms of an “intrinsic” map T : V W and in terms of the entries of the matrix. Since the map T itself doesn’t refer to any basis, the abstract definition will imply that the numerical definition doesn’t depend on the choice of a basis.

9.7  Subspaces and picking convenient bases

Prototypical example for this section: Any two linearly independent vectors in ℝ³.

Definition 9.7.1. Let M be a left R-module. A submodule N of M is a module N such that every element of N is also an element of M. If M is a vector space then N is called a subspace.

Example 9.7.2 (Kernels)
The kernel of a map T : V → W (written ker T) is the set of v ∈ V such that T(v) = 0_W. It is a subspace of V, since it’s closed under addition and scaling (why?).

Example 9.7.3 (Spans)
Let V be a vector space and v₁, …, vₘ be any vectors of V. The span of these vectors is defined as the set

{a₁v₁ + ⋯ + aₘvₘ | a₁, …, aₘ ∈ k}.

Note that it is a subspace of V as well!

Question 9.7.4. Why is 0_V an element of each of the above examples? In general, why must any subspace contain 0_V?

Subspaces behave nicely with respect to bases.

Theorem 9.7.5 (Basis completion)
Let V be an n-dimensional space, and V′ a subspace of V. Then

(a)
V′ is also finite-dimensional.
(b)
If e₁, …, eₘ is a basis of V′, then there exist eₘ₊₁, …, eₙ in V such that e₁, …, eₙ is a basis of V.

Proof. Omitted, since it is intuitive and the proof is not that enlightening. (However, we will use this result repeatedly later on, so do take the time to internalize it now.) □

A very common use case is picking a convenient basis for a map T.

Theorem 9.7.6 (Picking a basis for linear maps)
Let T : V → W be a map of finite-dimensional vector spaces, with n = dim V, m = dim W. Then there exists a basis v₁, …, vₙ of V and a basis w₁, …, wₘ of W, as well as a nonnegative integer k, such that

T(v_i) = \begin{cases} w_i & \text{if } i \le k \\ 0_W & \text{if } i > k. \end{cases}

Moreover dim ker T = n − k and dim T^{img}(V) = k.

Sketch of Proof. You might like to try this one yourself before reading on: it’s a repeated application of ?? .

Let ker T have dimension n − k. We can pick vₖ₊₁, …, vₙ a basis of ker T. Then extend it to a basis v₁, …, vₙ of V. The map T is injective over the span of v₁, …, vₖ (since only 0_V is in the kernel) so its images in W are linearly independent. Setting wᵢ = T(vᵢ) for each i ≤ k, we get some linearly independent set in W. Then extend it again to a basis of W. □

This theorem is super important, not only because of applications but also because it will give you the right picture in your head of how a linear map is supposed to look. I’ll even draw a cartoon of it to make sure you remember.

In particular, for T : V → W, one can write V = ker T ⊕ V′, so that T annihilates its kernel while sending V′ to an isomorphic copy in W.

A corollary of this (which you should have expected anyway) is the so-called rank-nullity theorem, which is the analog of the first isomorphism theorem.

Theorem 9.7.7 (Rank-nullity theorem)
Let V and W be finite-dimensional vector spaces. If T : V → W, then

dim V = dim ker T + dim im T.

Question 9.7.8. Conclude the rank-nullity theorem from ?? .
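Here is a numerical illustration of rank-nullity, a sketch in Python with numpy (the matrix is made up for illustration): for any matrix, the rank (the dimension of the image) plus the nullity (the dimension of the kernel) equals the dimension of the domain.

    import numpy as np

    # A map T from R^4 to R^3; the second row is twice the first.
    T = np.array([[1., 2., 3., 4.],
                  [2., 4., 6., 8.],
                  [0., 1., 0., 1.]])

    rank = np.linalg.matrix_rank(T)   # dim im T = 2
    nullity = T.shape[1] - rank       # dim ker T = 2
    print(rank + nullity)             # 4 = dim V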

9.8  A cute application: Lagrange interpolation

Here’s a cute application of linear algebra to a theorem from high school.

Theorem 9.8.1 (Lagrange interpolation)
Let x₁, …, xₙ₊₁ be distinct real numbers and y₁, …, yₙ₊₁ any real numbers. Then there exists a unique polynomial P of degree at most n such that

P(xᵢ) = yᵢ

for every i.

When n = 1 for example, this loosely says there is a unique line joining two points.

Proof. The idea is to consider the vector space V of polynomials with degree at most n, as well as the vector space W = ℝⁿ⁺¹.

Question 9.8.2. Check that dimV = n + 1 = dimW. This is easiest to do if you pick a basis for V , but you can then immediately forget about the basis once you finish this exercise.

Then consider the linear map T : V W given by

P ↦ (P(x₁), …, P(xₙ₊₁)).

This is indeed a linear map because, well, T(P + Q) = T(P) + T(Q) and T(cP) = cT(P). It also happens to be injective: if P ∈ ker T, then P(x₁) = ⋯ = P(xₙ₊₁) = 0; but deg P ≤ n, and a nonzero polynomial of degree at most n has at most n roots, so P can only be the zero polynomial.

So T is an injective map between vector spaces of the same dimension. Thus it is actually a bijection, which is exactly what we wanted. □
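Concretely, inverting T means solving a linear system for the coefficients of P. A sketch in Python with numpy (the data points and the library calls are my own additions for illustration):

    import numpy as np

    # Interpolate through (0, 1), (1, 3), (2, 7); here n = 2.
    xs = np.array([0., 1., 2.])
    ys = np.array([1., 3., 7.])

    # In the basis 1, x, x^2 the matrix of T is the Vandermonde matrix
    # with rows (1, x_i, x_i^2); injectivity of T means it is invertible.
    V = np.vander(xs, 3, increasing=True)
    print(np.linalg.solve(V, ys))   # [1. 1. 1.], i.e. P(x) = 1 + x + x^2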

9.9  (Digression) Arrays of numbers are evil

As I’ll stress repeatedly, a matrix represents a linear map between two vector spaces. Writing it in the form of an m × n matrix is merely a very convenient way to see the map concretely. But it obfuscates the fact that this map is, well, a map, not an array of numbers.

If you took high school precalculus, you’ll have seen everything done in terms of matrices. To any typical high school student, a matrix is an array of numbers. No one is sure what exactly these numbers represent, but they’re told how to magically multiply these arrays to get more arrays. They’re told that the matrix

\begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}

is an “identity matrix”, because when you multiply by another matrix it doesn’t change. Then they’re told that the determinant is some magical combination of these numbers formed by this weird multiplication rule. No one knows what this determinant does, other than the fact that det(AB) = det A det B, and something about areas and row operations and Cramer’s rule.

Then you go into linear algebra in college, and you do more magic with these arrays of numbers. You’re told that two matrices T₁ and T₂ are similar if

T₂ = ST₁S⁻¹

for some invertible matrix S. You’re told that the trace of a matrix Tr T is the sum of the diagonal entries. Somehow this doesn’t change if you look at a similar matrix, but you’re not sure why. Then you define the characteristic polynomial as

p_T(X) = det(XI − T).

Somehow this also doesn’t change if you take a similar matrix, but now you really don’t know why. And then you have the Cayley-Hamilton theorem in all its black magic: p_T(T) is the zero map. Out of curiosity you Google the proof, and you find some ad-hoc procedure which still leaves you with no idea why it’s true.

This is terrible. What’s so special about T₂ = ST₁S⁻¹? Only if you know that the matrices are linear maps does this make sense: T₂ is just T₁ rewritten with a different choice of basis.

I really want to push the opposite view. Linear algebra is the study of linear maps, but it is taught as the study of arrays of numbers, and no one knows what these numbers mean. And for a good reason: the numbers are meaningless. They are a highly convenient way of encoding the matrix, but they are not the main objects of study, any more than the dates of events are the main objects of study in history.

The other huge downside is that people get the impression that the only (real) vector space in existence is ℝⁿ. As explained in ?? , while you can work this way if you’re a soulless robot, it’s very unnatural for humans to do so.

When I took Math 55a as a freshman at Harvard, I got the exact opposite treatment: we did all of linear algebra without writing down a single matrix. During all this time I was quite confused. What’s wrong with a basis? I didn’t appreciate until later that this approach was the morally correct way to treat the subject: it made it clear what was happening.

Throughout the Napkin, I’ve tried to strike a balance between these two approaches, using matrices when appropriate to illustrate the maps and to simplify proofs, but ultimately writing theorems and definitions in their morally correct form. I hope that this has both the advantage of giving the “right” definitions while being concrete enough to be digested. But I would like to say for the record that, if I had to pick between the high school approach and the 55a approach, I would pick 55a in a heartbeat.

9.10  A word on general modules

Prototypical example for this section: ℤ[√2] is a ℤ-module of rank two.

I focused mostly on vector spaces (aka modules over a field) in this chapter for simplicity, so I want to make a few remarks about modules over a general commutative ring R before concluding.

Firstly, recall that for general modules, we say “generating set” instead of “spanning set”. Shrug.

The main issue with rings is that our key theorem ??  fails in spectacular ways. For example, consider ℤ as a ℤ-module over itself. Then {2} is linearly independent, but it cannot be extended to a basis. Similarly, {2,3} is spanning, but one cannot cut it down to a basis. You can see why defining dimension is going to be difficult.

Nonetheless, there are still analogs of some of the definitions above.

Definition 9.10.1. An R-module M is called finitely generated if it has a finite generating set.

Definition 9.10.2. An R-module M is called free if it has a basis. As said before, the analogue of the dimension theorem holds, and we use the word rank to denote the size of the basis. As before, there’s an isomorphism M ≅ R^{⊕n} where n is the rank.

Example 9.10.3 (An example of a ℤ-module)
The ℤ-module

ℤ[√2] = {a + b√2 | a,b ∈ ℤ}

has a basis {1, √2}, so we say it is a free ℤ-module of rank 2.

Abuse of Notation 9.10.4 (Notation for groups). Recall that an abelian group can be viewed as a ℤ-module (and in fact vice-versa!), so we can (and will) apply these words to abelian groups. We’ll use the notation G ⊕ H for two abelian groups G and H for their Cartesian product, emphasizing the fact that G and H are abelian. This will happen when we study algebraic number theory and homology groups.

9.11  A few harder problems to think about

General hint: ??  will be your best friend for many of these problems.

Problem 9A. Let V and W be finite-dimensional vector spaces with nonzero dimension, and consider linear maps T : V → W. Complete the following table by writing “sometimes”, “always”, or “never” for each entry.

                      | T injective | T surjective | T isomorphism
If dim V > dim W      |             |              |
If dim V = dim W      |             |              |
If dim V < dim W      |             |              |

Problem 9B (Equal dimension vector spaces are usually isomorphisms). Let V and W be finite-dimensional vector spaces with dim V = dim W. Prove that for a map T : V → W, the following are equivalent:

  • T is injective,
  • T is surjective,
  • T is an isomorphism.

Problem 9C (Multiplication by √5). Let V = ℚ[√5] = {a + b√5 | a,b ∈ ℚ} be a two-dimensional ℚ-vector space, and fix the basis {1, √5} for it. Write down the 2 × 2 matrix with rational coefficients that corresponds to multiplication by √5.

Problem 9D (Multivariable Lagrange interpolation). Let S ⊆ ℤ² be a set of n lattice points. Prove that there exists a nonzero two-variable polynomial p with real coefficients, of degree at most √(2n), such that p(x,y) = 0 for every (x,y) ∈ S.

Problem 9E (Putnam 2003). Do there exist polynomials a(x), b(x), c(y), d(y) such that

1 + xy + (xy)² = a(x)c(y) + b(x)d(y)

holds identically?

Problem 9F (TSTST 2014). Let P(x) and Q(x) be arbitrary polynomials with real coefficients, and let d be the degree of P(x). Assume that P(x) is not the zero polynomial. Prove that there exist polynomials A(x) and B(x) such that

(i)
Both A and B have degree at most d∕2,
(ii)
At most one of A and B is the zero polynomial,
(iii)
P divides A² + Q ⋅ B².

Problem 9G (Idempotents are projection maps). Let P : V V be a linear map, where V is a vector space (not necessarily finite-dimensional). Suppose P is idempotent, meaning P(P(v)) = P(v) for each v V , or equivalently P is the identity on its image. Prove that

V = ker P ⊕ im P.

Thus we can think of P as projection onto the subspace im P.

Problem 9H. Let V be a finite dimensional vector space. Let T : V → V be a linear map, and let Tⁿ : V → V denote T applied n times. Prove that there exists an integer N such that

V = ker T^N ⊕ im T^N.

10  Eigen-things

This chapter will develop the theory of eigenvalues and eigenvectors, culminating in the so-called “Jordan canonical form”. (Later on we will use it to define the characteristic polynomial.)

10.1  Why you should care

We know that a square matrix T is really just a linear map from V to V. What’s the simplest type of linear map? It would just be multiplication by some scalar λ, which would have associated matrix (in any basis!)

T = \begin{bmatrix} \lambda & 0 & \cdots & 0 \\ 0 & \lambda & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda \end{bmatrix}.

That’s perhaps too simple, though. If we had a fixed basis e₁, …, eₙ then another very “simple” operation would just be scaling each basis element eᵢ by λᵢ, i.e. a diagonal matrix of the form

T = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix}.

These maps are more general. Indeed, you can, for example, compute T¹⁰⁰ in a heartbeat: the map sends e₁ ↦ λ₁¹⁰⁰e₁. (Try doing that with an arbitrary n × n matrix.)

Of course, most linear maps are probably not that nice. Or are they?

Example 10.1.1 (Getting lucky)
Let V be some two-dimensional vector space with e₁ and e₂ as basis elements. Let’s consider a map T : V → V by e₁ ↦ 2e₁ and e₂ ↦ e₁ + 3e₂, which you can even write concretely as

T = \begin{bmatrix} 2 & 1 \\ 0 & 3 \end{bmatrix}  in basis e₁, e₂.

This doesn’t look anywhere as nice until we realize we can rewrite it as

e₁ ↦ 2e₁
e₁ + e₂ ↦ 3(e₁ + e₂).

So suppose we change to the basis e₁ and e₁ + e₂. Thus in the new basis,

T = \begin{bmatrix} 2 & 0 \\ 0 & 3 \end{bmatrix}  in basis e₁, e₁ + e₂.

So our completely random-looking map, under a suitable change of basis, looks like the very nice maps we described before!
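You can verify this change of basis numerically. A sketch in Python with numpy (my own illustration, not part of the text): the columns of P are the new basis vectors e₁ and e₁ + e₂ written in the old basis, and the matrix of T in the new basis is P⁻¹TP.

    import numpy as np

    T = np.array([[2., 1.],
                  [0., 3.]])
    P = np.array([[1., 1.],    # columns: e1 and e1 + e2
                  [0., 1.]])

    # Conjugating by the change-of-basis matrix diagonalizes T.
    print(np.linalg.inv(P) @ T @ P)   # [[2. 0.], [0. 3.]]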

In this chapter, we will be making our luck, and we will see that our better understanding of matrices gives us the right way to think about this.

10.2  Warning on assumptions

Most theorems in this chapter only work for

  • finite-dimensional vector spaces, and
  • vector spaces over an algebraically closed field.

On the other hand, the definitions work fine without these assumptions.

10.3  Eigenvectors and eigenvalues

Let k be a field and V a vector space over it. In the above example, we saw that there were two very nice vectors, e₁ and e₁ + e₂, for which T did something very simple. Naturally, these vectors have a name.

Definition 10.3.1. Let T : V → V and v ∈ V a nonzero vector. We say that v is an eigenvector if T(v) = λv for some λ ∈ k (possibly zero, but remember v ≠ 0). The value λ is called an eigenvalue of T.

We will sometimes abbreviate “v is an eigenvector with eigenvalue λ” to just “v is a λ-eigenvector”.

Of course, no mention to a basis anywhere.

Example 10.3.2 (An example of an eigenvector and eigenvalue)
Consider the example earlier with T = \begin{bmatrix} 2 & 1 \\ 0 & 3 \end{bmatrix}.

(a)
Note that e₁ and e₁ + e₂ are 2-eigenvectors and 3-eigenvectors, respectively.
(b)
Of course, 5e₁ is also a 2-eigenvector.
(c)
And, 7e₁ + 7e₂ is also a 3-eigenvector.

So you can quickly see the following observation.

Question 10.3.3. Show that the λ-eigenvectors, together with {0}, form a subspace.

Definition 10.3.4. For any λ, we define the λ-eigenspace as the set of λ-eigenvectors together with 0.

This lets us state succinctly that “2 is an eigenvalue of T with one-dimensional eigenspace spanned by e1”.

Unfortunately, it’s not exactly true that eigenvalues always exist.

Example 10.3.5 (Eigenvalues need not exist)
Let V = ℝ² and let T be the map which rotates a vector by 90° around the origin. Then T(v) is not a multiple of v for any v ∈ V, other than the trivial v = 0.

However, it is true if we replace k with an algebraically closed field.

Theorem 10.3.6 (Eigenvalues always exist over algebraically closed fields)
Suppose k is an algebraically closed field. Let V be a finite dimensional k-vector space. Then if T : V → V is a linear map, there exists an eigenvalue λ ∈ k.

Proof. (From [?]) The idea behind this proof is to consider “polynomials” in T. For example, 2T² − 4T + 5 would be shorthand for the map v ↦ 2T(T(v)) − 4T(v) + 5v. In this way we can consider “polynomials” P(T); this lets us tie in the “algebraically closed” condition. These polynomials behave nicely:

Question 10.3.7. Show that P(T) + Q(T) = (P + Q)(T) and P(T) ∘ Q(T) = (P ⋅ Q)(T).

Let n = dim V < ∞ and fix any nonzero vector v ∈ V, and consider the vectors v, T(v), …, Tⁿ(v). There are n + 1 of them, so they can’t be linearly independent for dimension reasons; thus there is a nonzero polynomial P such that P(T) is zero when applied to v. WLOG suppose P is a monic polynomial, and thus P(z) = (z − r₁)⋯(z − rₘ) say. Then we get

0 = (T − r₁ id) ∘ (T − r₂ id) ∘ ⋯ ∘ (T − rₘ id)(v)

(where id is the identity map). This means at least one of the maps T − rᵢ id is not injective, i.e. has a nontrivial kernel; any nonzero element of that kernel is an eigenvector with eigenvalue rᵢ. □

So in general we like to consider algebraically closed fields. This is not a big loss: any real matrix can be interpreted as a complex matrix whose entries just happen to be real, for example.
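For instance, the rotation from Example 10.3.5 has no real eigenvalues, but viewed as a complex matrix it does. A sketch in Python with numpy (my own illustration; numpy computes eigenvalues over the complex numbers):

    import numpy as np

    # Rotation by 90 degrees: e1 -> e2 and e2 -> -e1.
    R = np.array([[0., -1.],
                  [1.,  0.]])

    # No real eigenvalue exists, but over the algebraically closed field C
    # the eigenvalues +i and -i appear, as the theorem promises.
    print(np.linalg.eig(R)[0])   # [0.+1.j 0.-1.j]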

10.4  The Jordan form

So that you know exactly where I’m going, here’s the main theorem.

Definition 10.4.1. A Jordan block is an n × n matrix of the following shape:

\begin{bmatrix} \lambda & 1 & 0 & 0 & \cdots & 0 & 0 \\ 0 & \lambda & 1 & 0 & \cdots & 0 & 0 \\ 0 & 0 & \lambda & 1 & \cdots & 0 & 0 \\ 0 & 0 & 0 & \lambda & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & 0 & \cdots & \lambda & 1 \\ 0 & 0 & 0 & 0 & \cdots & 0 & \lambda \end{bmatrix}

In other words, it has λ on the diagonal, and 1 above it. We allow n = 1, so [λ] is a Jordan block.

Theorem 10.4.2 (Jordan canonical form)
Let T : V V be a linear map of finite-dimensional vector spaces over an algebraically closed field k. Then we can choose a basis of V such that the matrix T is “block-diagonal” with each block being a Jordan block.

Such a matrix is said to be in Jordan form. This form is unique up to rearranging the order of the blocks.

As an example, this means the matrix should look something like:

\begin{bmatrix} \lambda_1 & 1 & & & & & & & \\ 0 & \lambda_1 & & & & & & & \\ & & \lambda_2 & & & & & & \\ & & & \lambda_3 & 1 & 0 & & & \\ & & & 0 & \lambda_3 & 1 & & & \\ & & & 0 & 0 & \lambda_3 & & & \\ & & & & & & \ddots & & \\ & & & & & & & \lambda_m & 1 \\ & & & & & & & 0 & \lambda_m \end{bmatrix}

Question 10.4.3. Check that diagonal matrices are the special case when each block is 1 × 1.

What does this mean? Basically, it means our dream is almost true. What happens is that V can get broken down as a direct sum

V = J1 ⊕ J2 ⊕ ⋅⋅⋅⊕ Jm

and T acts on each of these subspaces independently. These subspaces correspond to the blocks in the matrix above. In the simplest case, dimJi = 1, so Ji has a basis element e for which T(e) = λie; in other words, we just have a simple eigenvalue. But on occasion, the situation is not quite so simple, and we have a block of size greater than 1; this leads to 1’s just above the diagonals.

I’ll explain later how to interpret the 1’s, when I make up the word descending staircase. For now, you should note that even if dim Jᵢ ≥ 2, we still have a basis element which is an eigenvector with eigenvalue λᵢ.

Example 10.4.4 (A concrete example of Jordan form)
Let T : k⁶ → k⁶ and suppose T is given by the matrix

T = \begin{bmatrix} 5 & 0 & 0 & 0 & 0 & 0 \\ 0 & 2 & 1 & 0 & 0 & 0 \\ 0 & 0 & 2 & 0 & 0 & 0 \\ 0 & 0 & 0 & 7 & 0 & 0 \\ 0 & 0 & 0 & 0 & 3 & 0 \\ 0 & 0 & 0 & 0 & 0 & 3 \end{bmatrix}.

Reading the matrix, we can compute all the eigenvectors and eigenvalues: for any constants a, b ∈ k we have

T(a ⋅ e₁) = 5a ⋅ e₁
T(a ⋅ e₂) = 2a ⋅ e₂
T(a ⋅ e₄) = 7a ⋅ e₄
T(a ⋅ e₅ + b ⋅ e₆) = 3(a ⋅ e₅ + b ⋅ e₆).

The element e₃, on the other hand, is not an eigenvector since T(e₃) = e₂ + 2e₃.
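If you want to experiment, computer algebra systems can compute Jordan forms. A sketch using sympy (my own addition; to my knowledge sympy's jordan_form returns matrices P and J with A = PJP⁻¹):

    from sympy import Matrix

    # A matrix not yet in Jordan form: its only eigenvalue is 4,
    # with a one-dimensional eigenspace.
    A = Matrix([[5, 1],
                [-1, 3]])

    P, J = A.jordan_form()
    print(J)   # Matrix([[4, 1], [0, 4]]), a single 2 x 2 Jordan block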

10.5  Nilpotent maps

Bear with me for a moment. First, define:

Definition 10.5.1. A map T : V → V is nilpotent if Tᵐ is the zero map for some integer m. (Here Tᵐ means “T applied m times”.)

What’s an example of a nilpotent map?

Example 10.5.2 (The “descending staircase”)
Let V = k³ have basis e₁, e₂, e₃. Then the map T which sends

e₃ ↦ e₂ ↦ e₁ ↦ 0

is nilpotent, since T(e₁) = T²(e₂) = T³(e₃) = 0, and hence T³(v) = 0 for all v ∈ V.

The 3 × 3 descending staircase has matrix representation

T = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}.

You’ll notice this is a Jordan block.

Exercise 10.5.3. Show that the descending staircase above has 0 as its only eigenvalue.
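A quick numerical check of the staircase, as a sketch in Python with numpy (my own addition, not part of the text):

    import numpy as np

    # The 3 x 3 descending staircase: e3 -> e2 -> e1 -> 0.
    T = np.array([[0., 1., 0.],
                  [0., 0., 1.],
                  [0., 0., 0.]])

    print(T @ T)       # only the top-right entry survives
    print(T @ T @ T)   # the zero matrix, so T is nilpotent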

That’s a pretty nice example. As another example, we can have multiple such staircases.

Example 10.5.4 (Double staircase)
Let V = k⁵ have basis e₁, e₂, e₃, e₄, e₅. Then the map

e₃ ↦ e₂ ↦ e₁ ↦ 0  and  e₅ ↦ e₄ ↦ 0

is nilpotent.

Picture, with some zeros omitted for emphasis:

T = \begin{bmatrix} 0 & 1 & 0 & & \\ 0 & 0 & 1 & & \\ 0 & 0 & 0 & & \\ & & & 0 & 1 \\ & & & 0 & 0 \end{bmatrix}

You can see this isn’t really that different from the previous example; it’s just the same idea repeated multiple times. And in fact we now claim that all nilpotent maps have essentially that form.

Theorem 10.5.5 (Nilpotent Jordan)
Let V be a finite-dimensional vector space over an algebraically closed field k. Let T : V → V be a nilpotent map. Then we can write V = V₁ ⊕ ⋯ ⊕ Vₘ where each Vᵢ has a basis of the form vᵢ, T(vᵢ), …, T^{dim Vᵢ − 1}(vᵢ) for some vᵢ ∈ Vᵢ.

Hence:

Every nilpotent map can be viewed as independent staircases.

Each chain vᵢ, T(vᵢ), T(T(vᵢ)), … is just one staircase. The proof is given later, but first let me point out where this is going.

Here’s the punch line. Let’s take the double staircase again. Expressing it as a matrix gives, say

S = \begin{bmatrix} 0 & 1 & 0 & & \\ 0 & 0 & 1 & & \\ 0 & 0 & 0 & & \\ & & & 0 & 1 \\ & & & 0 & 0 \end{bmatrix}.

Then we can compute

S + λ id = \begin{bmatrix} \lambda & 1 & 0 & & \\ 0 & \lambda & 1 & & \\ 0 & 0 & \lambda & & \\ & & & \lambda & 1 \\ & & & 0 & \lambda \end{bmatrix}.

It’s a bunch of λ Jordan blocks! This gives us a plan to proceed: we need to break V into a bunch of subspaces such that T − λ id is nilpotent over each subspace. Then Nilpotent Jordan will finish the job.

10.6  Reducing to the nilpotent case

Definition 10.6.1. Let T : V → V. A subspace W ⊆ V is called T-invariant if T(w) ∈ W for any w ∈ W. In this way, T can be thought of as a map W → W.

In this way, the Jordan form is a decomposition of V into invariant subspaces.

Now I’m going to be cheap, and define:

Definition 10.6.2. A map T : V → V is called indecomposable if it’s impossible to write V = W₁ ⊕ W₂ where both W₁ and W₂ are nontrivial T-invariant spaces.

Picture of a decomposable map:

\begin{bmatrix} W_1 & 0 \\ 0 & W_2 \end{bmatrix}

As you might expect, we can break a space apart into “indecomposable” parts.

Proposition 10.6.3 (Invariant subspace decomposition)
Let V be a finite-dimensional vector space. Given any map T : V V , we can write

V = V1 ⊕ V2 ⊕ ⋅⋅⋅⊕ Vm

where each Vᵢ is T-invariant, and for any i the map T : Vᵢ → Vᵢ is indecomposable.

Proof. Same as the proof that every integer is the product of primes. If V is not decomposable, we are done. Otherwise, by definition write V = W1 W2 and then repeat on each of W1 and W2. □

Incredibly, with just that we’re almost done! Consider a decomposition as above, so that T : V₁ → V₁ is an indecomposable map. Then T has an eigenvalue λ₁, so let S = T − λ₁ id; hence ker S ≠ {0}.

Question 10.6.4. Show that V₁ is also S-invariant, so we can consider S : V₁ → V₁.

By ?? , we have

V₁ = ker S^N ⊕ im S^N

for some N. But we assumed T was indecomposable, so this can only happen if im S^N = {0} and ker S^N = V₁ (since ker S^N contains our eigenvector). Hence S is nilpotent, so it’s a collection of staircases. In fact, since T is indecomposable, there is only one staircase. Hence V₁ is a Jordan block, as desired.

10.7  (Optional) Proof of nilpotent Jordan

The proof is just induction on dim V. Assume dim V ≥ 1, and let W = T^{img}(V) denote the image of T. Since T is nilpotent, we must have W ⊊ V. Moreover, if W = {0} (i.e. T is the zero map) then we’re already done. So assume {0} ⊊ W ⊊ V.

By the inductive hypothesis, we can select a good basis of W:

ℬ′ = {T(v₁), T(T(v₁)), T(T(T(v₁))), …,
T(v₂), T(T(v₂)), T(T(T(v₂))), …,
…,
T(v_ℓ), T(T(v_ℓ)), T(T(T(v_ℓ))), …}

for some T(vᵢ) ∈ W (here we have taken advantage of the fact that each element of W is itself of the form T(v) for some v).

Also, note that there are exactly ℓ elements of ℬ′ which are in ker T (namely the last element of each of the ℓ staircases). We can thus complete them to a basis of ker T by adding v_{ℓ+1}, …, vₘ (where m = dim ker T). (In other words, the last element of each staircase plus the m − ℓ new ones are a basis for ker T.)

Now consider

ℬ = {v₁, T(v₁), T(T(v₁)), T(T(T(v₁))), …,
v₂, T(v₂), T(T(v₂)), T(T(T(v₂))), …,
…,
v_ℓ, T(v_ℓ), T(T(v_ℓ)), T(T(T(v_ℓ))), …,
v_{ℓ+1}, v_{ℓ+2}, …, vₘ}.

Question 10.7.1. Check that there are exactly ℓ + dim W + (dim ker T − ℓ) = dim V elements.

Exercise 10.7.2. Show that all the elements are linearly independent. (Assume for contradiction there is some linear dependence, then take T of both sides.)

Hence ℬ is a basis of the desired form.

10.8  Algebraic and geometric multiplicity

Prototypical example for this section: The matrix T below.

This is some convenient notation: let’s consider the matrix in Jordan form

T = \begin{bmatrix} 7 & 1 & & & & \\ 0 & 7 & & & & \\ & & 9 & & & \\ & & & 7 & 1 & 0 \\ & & & 0 & 7 & 1 \\ & & & 0 & 0 & 7 \end{bmatrix}.

We focus on the eigenvalue 7, which appears multiple times, so it is certainly “repeated”. However, there are two different senses in which you could say it is repeated.

Question 10.8.1. In this example, how many times do you need to apply T − 7 id to e₆ to get zero?

Both these notions are valid, so we will name both. To preserve generality, we first state the “intrinsic” definition.

Definition 10.8.2. Let T : V → V be a linear map and λ a scalar.

  • The geometric multiplicity of λ is the dimension of the λ-eigenspace, i.e. dim ker(T − λ id).
  • The algebraic multiplicity of λ is the dimension of the space of generalized eigenvectors: those v ∈ V with (T − λ id)ⁿ(v) = 0 for some n ≥ 1.

(Silly edge case: we allow “multiplicity zero” if λ is not an eigenvalue at all.)

However in practice you should just count the Jordan blocks.

Example 10.8.3 (An example of eigenspaces via Jordan form)
Retain the matrix T mentioned earlier and let λ = 7.

  • The geometric multiplicity of 7 is 2: the 7-eigenspace is spanned by e₁ and e₄.
  • The algebraic multiplicity of 7 is 5: each of e₁, e₂, e₄, e₅, e₆ is killed by a large enough power of T − 7 id, but e₃ is not.

To be completely explicit, here is how you think of these in practice:

Proposition 10.8.4 (Geometric and algebraic multiplicity vs Jordan blocks)
Assume T : V → V is a linear map of finite-dimensional vector spaces, written in Jordan form. Let λ be a scalar. Then

  • the geometric multiplicity of λ is the number of Jordan blocks with eigenvalue λ, while
  • the algebraic multiplicity of λ is the sum of the sizes of the Jordan blocks with eigenvalue λ.

Question 10.8.5. Show that the geometric multiplicity is always less than or equal to the algebraic multiplicity.

This actually gives us a tentative definition: over an algebraically closed field, one could define the characteristic polynomial of T as

p_T(X) = ∏ᵢ (X − λᵢ)^{dᵢ},

where the product runs over the eigenvalues λᵢ of T and dᵢ denotes the algebraic multiplicity of λᵢ.

This definition is okay, but it has the disadvantage of requiring the ground field to be algebraically closed. It is also not the definition that is easiest to work with computationally. The next two chapters will give us a better definition.

10.9  A few harder problems to think about

Problem 10A (Sum of algebraic multiplicities). Given a 2018-dimensional complex vector space V and a map T : V → V, what is the sum of the algebraic multiplicities of all eigenvalues of T?

Problem 10B (The word “diagonalizable”). A linear map T : V → V (where dim V is finite) is said to be diagonalizable if it has a basis e₁, …, eₙ such that each eᵢ is an eigenvector.

(a)
Explain the name “diagonalizable”.
(b)
Suppose we are working over an algebraically closed field. Then show that T is diagonalizable if and only if for any λ, the geometric multiplicity of λ equals the algebraic multiplicity of λ.

Problem 10C (Switcharoo). Let V be the ℝ-vector space with basis e₁ and e₂. The map T : V → V sends T(e₁) = e₂ and T(e₂) = e₁. Determine the eigenspaces of T.

Problem 10D (Writing a polynomial backwards). Define the complex vector space V of polynomials with degree at most 2, say V = {ax² + bx + c | a,b,c ∈ ℂ}. Define T : V → V by

T(ax² + bx + c) = cx² + bx + a.

Determine the eigenspaces of T.

Problem 10E (Differentiation of polynomials). Let V = ℝ[x] be the real vector space of all real polynomials. Note that d/dx : V → V is a linear map (for example it sends x³ to 3x²). Which real numbers are eigenvalues of this map?

Problem 10F (Differentiation of functions). Let V be the real vector space of all infinitely differentiable functions ℝ → ℝ. Note that d/dx : V → V is a linear map (for example it sends cos x to −sin x). Which real numbers are eigenvalues of this map?

11  Dual space and trace

You may have learned in high school that given a matrix

\begin{bmatrix} a & c \\ b & d \end{bmatrix}

the trace is the sum a + d along the diagonal and the determinant is ad − bc. But we know that a matrix is somehow just encoding a linear map using a choice of basis. Why would these random formulas somehow not depend on the choice of a basis?

In this chapter, we are going to give an intrinsic definition of Tr T, where T : V → V and dim V < ∞. This will give a coordinate-free definition which will in particular imply the trace a + d doesn’t change if we take a different basis.

In doing so, we will introduce two new constructions: the tensor product V ⊗ W (which is a sort of product of two spaces, with dimension dim V ⋅ dim W) and the dual space V^∨, which is the set of linear maps V → k (a k-vector space). Later on, when we upgrade from a vector space V to an inner product space V, we will see that the dual space gives a nice interpretation of the “transpose” of a matrix. You’ll already see some of that come through here.

The trace is only defined for finite-dimensional vector spaces, so if you want you can restrict your attention to finite-dimensional vector spaces for this chapter. (On the other hand we do not need the ground field to be algebraically closed.)

The next chapter will then do the same for the determinant.

11.1  Tensor product

Prototypical example for this section: ℝ[x] ⊗ ℝ[y] = ℝ[x,y].

We know that dim(V ⊕ W) = dim V + dim W, even though as sets V ⊕ W looks like V × W. What if we wanted a real “product” of spaces, with multiplication of dimensions?

For example, let’s pull out my favorite example of a real vector space, namely

V = {ax² + bx + c | a,b,c ∈ ℝ}.

Here’s another space, a little smaller:

W  = {dy + e | d,e ∈ ℝ }.

If we take the direct sum, then we would get some rather unnatural vector space of dimension five (whose elements can be thought of as pairs (ax2 + bx + c,dy + e)). But suppose we want a vector space whose elements are products of polynomials in V and W; it would contain elements like 4x2y + 5xy + y + 3. In particular, the basis would be

{x²y, x², xy, x, y, 1}

and thus have dimension six.

For this we resort to the tensor product. It does exactly this, except that the “multiplication” is done by a scary symbol ⊗: think of it as a “wall” that separates the elements between the two vector spaces. For example, the above example might be written as

4x² ⊗ y + 5x ⊗ y + 1 ⊗ y + 3 ⊗ 1.

(This should be read as (4x² ⊗ y) + (5x ⊗ y) + ⋯; addition comes after ⊗.) Of course there should be no distinction between writing 4x² ⊗ y and x² ⊗ 4y or even 2x² ⊗ 2y. While we want to keep the x and y separate, the scalars should be free to float around.

Of course, there’s no need to do everything in terms of just the monomials. We are free to write

(x+ 1) ⊗ (y + 1).

If you like, you can expand this as

x ⊗ y + 1 ⊗ y + x⊗ 1 + 1 ⊗ 1.

Same thing. The point is that we can take any two of our polynomials and artificially “tensor” them together.

The definition of the tensor product does exactly this, and nothing else.

Definition 11.1.1. Let V and W be vector spaces over the same field k. The tensor product V ⊗ₖ W is the abelian group generated by elements of the form v ⊗ w, subject to relations

(v₁ + v₂) ⊗ w = v₁ ⊗ w + v₂ ⊗ w
v ⊗ (w₁ + w₂) = v ⊗ w₁ + v ⊗ w₂
(c ⋅ v) ⊗ w = v ⊗ (c ⋅ w).

As a vector space, its action is given by c ⋅ (v ⊗ w) = (c ⋅ v) ⊗ w = v ⊗ (c ⋅ w).

Here’s another way to phrase the same idea. We define a pure tensor as an element of the form v ⊗ w for v ∈ V and w ∈ W. But we let the wall be “permeable” in the sense that

(c ⋅ v) ⊗ w = v ⊗ (c ⋅ w) = c ⋅ (v ⊗ w)

and we let multiplication and addition distribute as we expect. Then V ⊗ W consists of sums of pure tensors.

Example 11.1.2 (Infinite-dimensional example of tensor product: two-variable polynomials)
Although it’s not relevant to this chapter, this definition works equally well with infinite-dimensional vector spaces. The best example might be

ℝ[x] ⊗_ℝ ℝ[y] = ℝ[x,y].

That is, the tensor product of real polynomials in x with real polynomials in y turns out to just be the two-variable polynomials ℝ[x,y].

Remark 11.1.3 (Warning on sums of pure tensors) Remember the elements of V ⊗ₖ W really are sums of these pure tensors! If you liked the previous example, this fact has a nice interpretation — not every polynomial in ℝ[x,y] = ℝ[x] ⊗ ℝ[y] factors as a polynomial in x times a polynomial in y (i.e. as a pure tensor f(x) ⊗ g(y)). But they all can be written as sums of pure tensors xᵃ ⊗ yᵇ.

As the example we gave suggested, the basis of V ⊗ₖ W is literally the “product” of the bases of V and W. In particular, this fulfills our desire that dim(V ⊗ₖ W) = dim V ⋅ dim W.

Proposition 11.1.4 (Basis of V ⊗ W)
Let V and W be finite-dimensional k-vectorspaces. If e₁, …, eₘ is a basis of V and f₁, …, fₙ is a basis of W, then the basis of V ⊗ₖ W is precisely eᵢ ⊗ fⱼ, where i = 1, …, m and j = 1, …, n.

Proof. Omitted; it’s easy at least to see that this basis is spanning. □

Example 11.1.5 (Explicit computation)
Let V have basis e₁, e₂ and W have basis f₁, f₂. Let v = 3e₁ + 4e₂ ∈ V and w = 5f₁ + 6f₂ ∈ W. Let’s write v ⊗ w in this basis for V ⊗ₖ W:

v ⊗ w = (3e₁ + 4e₂) ⊗ (5f₁ + 6f₂)
= (3e₁) ⊗ (5f₁) + (4e₂) ⊗ (5f₁) + (3e₁) ⊗ (6f₂) + (4e₂) ⊗ (6f₂)
= 15(e₁ ⊗ f₁) + 20(e₂ ⊗ f₁) + 18(e₁ ⊗ f₂) + 24(e₂ ⊗ f₂).

So you can see why tensor products are a nice “product” to consider if we’re really interested in V × W in a way that’s more intimate than just a direct sum.
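In coordinates, the coefficients of v ⊗ w are exactly the pairwise products of the coordinates of v and w; this is what the Kronecker product computes. A sketch in Python with numpy (my own illustration; numpy's kron orders the basis e₁⊗f₁, e₁⊗f₂, e₂⊗f₁, e₂⊗f₂):

    import numpy as np

    v = np.array([3, 4])   # 3 e1 + 4 e2
    w = np.array([5, 6])   # 5 f1 + 6 f2

    # Coefficients of v (x) w in the basis e1(x)f1, e1(x)f2, e2(x)f1, e2(x)f2.
    print(np.kron(v, w))   # [15 18 20 24], matching the example above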

Abuse of Notation 11.1.6. Moving forward, we’ll almost always abbreviate ⊗ₖ to just ⊗, since k is usually clear.

Remark 11.1.7 — Observe that to define a linear map V ⊗ W → X, I only have to say what happens to each pure tensor v ⊗ w, since the pure tensors generate V ⊗ W. But again, keep in mind that V ⊗ W consists of sums of these pure tensors! In other words, V ⊗ W is generated by pure tensors.

Remark 11.1.8 — Much like the Cartesian product A × B of sets, you can tensor together any two vector spaces V and W over the same field k; the relationship between V and W is completely irrelevant. One can think of the ⊗ as a “wall” through which one can pass scalars in k, but which otherwise keeps the elements of V and W separated. Thus, ⊗ is content-agnostic.

This also means that even if V and W have some relation to each other, the tensor product doesn’t remember this. So for example v ⊗ 1 ≠ 1 ⊗ v in general, just like (g, 1_G) ≠ (1_G, g) in the group G × G.

11.2  Dual space

Prototypical example for this section: Rotate a column matrix by 90 degrees.

Consider the following vector space:

Example 11.2.1 (Functions from ℝ³ to ℝ)
The set of real functions f(x,y,z) is an infinite-dimensional real vector space. Indeed, we can add two functions to get f + g, and we can scale functions to get things like 2f.

This is a terrifyingly large vector space, but you can do some reasonable reductions. For example, you can restrict your attention to just the linear maps from ℝ³ to ℝ.

That’s exactly what we’re about to do. This definition might seem strange at first, but bear with me.

Definition 11.2.2. Let V be a k-vector space. Then V^∨, the dual space of V, is defined as the vector space whose elements are linear maps from V to k.

The addition and multiplication are pointwise: it’s the same notation we use when we write cf + g to mean c f(x) + g(x). The dual space itself is less easy to think about.

Let’s try to find a basis for V^∨. First, here is a very concrete interpretation of the vector space. Suppose for example V = ℝ³. We can think of elements of V as column matrices, like

v = \begin{bmatrix} 2 \\ 5 \\ 9 \end{bmatrix} ∈ V.

Then a linear map f : V → k can be interpreted as a row matrix:

f = \begin{bmatrix} 3 & 4 & 5 \end{bmatrix} ∈ V^∨.

Then

f(v) = \begin{bmatrix} 3 & 4 & 5 \end{bmatrix} \begin{bmatrix} 2 \\ 5 \\ 9 \end{bmatrix} = 71.

More precisely: to specify a linear map V → k, I only have to tell you where each basis element of V goes. In the above example, f sends e₁ to 3, e₂ to 4, and e₃ to 5. So f sends

2e₁ + 5e₂ + 9e₃ ↦ 2⋅3 + 5⋅4 + 9⋅5 = 71.

Let’s make all this precise.

Proposition 11.2.3 (The dual basis for V^∨)
Let V be a finite-dimensional vector space with basis e₁, …, eₙ. For each i consider the function eᵢ^∨ : V → k defined by

eᵢ^∨(eⱼ) = \begin{cases} 1 & i = j \\ 0 & i \neq j. \end{cases}

In more humane terms, eᵢ^∨(v) gives the coefficient of eᵢ in v.

Then e₁^∨, e₂^∨, …, eₙ^∨ is a basis of V^∨.

Example 11.2.4 (Explicit example of element in V^∨)
In this notation, f = 3e₁^∨ + 4e₂^∨ + 5e₃^∨. Do you see why the “sum” notation works as expected here? Indeed

f(e₁) = (3e₁^∨ + 4e₂^∨ + 5e₃^∨)(e₁)
= 3e₁^∨(e₁) + 4e₂^∨(e₁) + 5e₃^∨(e₁)
= 3⋅1 + 4⋅0 + 5⋅0 = 3.

That’s exactly what we wanted.

You might be inclined to point out that V ≅ V^∨ at this point, since there’s an obvious isomorphism eᵢ ↦ eᵢ^∨. You might call it “rotating the column matrix by 90°”. The issue is that this isomorphism depends very much on which basis you choose: if I pick a different basis, then the isomorphism will be intrinsically different.

It is true that V and V^∨ are isomorphic for finite-dimensional V, but you should already know that any two k-vector spaces of the same dimension are isomorphic. In light of this, the fact that V ≅ V^∨ is not especially impressive.

11.3  V^∨ ⊗ W gives matrices from V to W

Goal of this section:

If V and W are finite-dimensional k-vector spaces then V^∨ ⊗ W represents linear maps V → W.

Here’s the intuition. If V is three-dimensional and W is five-dimensional, then we can think of the maps V → W as a 5 × 3 array of numbers. We want to think of these maps as a vector space (since one can add or scale matrices). So it had better be a vector space with dimension 15, but just saying “k¹⁵” is not really that satisfying (what is the basis?).

To do better, we consider the tensor product

V^∨ ⊗ W

which somehow is a product of maps out of V and the target space W. We claim that this is in fact the space we want: i.e. there is a natural bijection between elements of V^∨ ⊗ W and linear maps from V to W.

First, how do we interpret an element of V^∨ ⊗ W as a map V → W? For concreteness, suppose V has a basis e₁, e₂, e₃, and W has a basis f₁, f₂, f₃, f₄, f₅. Consider an element of V^∨ ⊗ W, say

e₁^∨ ⊗ (f₂ + 2f₄) + 4e₂^∨ ⊗ f₅.

We want to interpret this element as a function V → W: so given a v ∈ V, we want to output an element of W. There’s really only one way to do this: feed v into the V^∨ guys on the left. That is, take the map

v ↦ e₁^∨(v) ⋅ (f₂ + 2f₄) + 4e₂^∨(v) ⋅ f₅ ∈ W.

So, there’s a natural way to interpret any element ξ₁ ⊗ w₁ + ⋯ + ξₘ ⊗ wₘ ∈ V^∨ ⊗ W as a linear map V → W. The claim is that in fact, every linear map V → W has such an interpretation.

First, for notational convenience,

Definition 11.3.1. Let Hom(V,W) denote the set of linear maps from V to W (which one can interpret as matrices which send V to W), viewed as a vector space over k. (The “Hom” stands for homomorphism.)

Question 11.3.2. Identify Hom(V,k) by name.

We can now write down something that’s more true generally.

Theorem 11.3.3 (V^∨ ⊗ W ≅ linear maps V → W)
Let V and W be finite-dimensional vector spaces. We define a map

Ψ : V^∨ ⊗ W → Hom(V, W)

by sending ξ₁ ⊗ w₁ + ⋯ + ξₘ ⊗ wₘ to the linear map

v ↦ ξ₁(v)w₁ + ⋯ + ξₘ(v)wₘ.

Then Ψ is an isomorphism of vector spaces, i.e. every linear map V → W can be uniquely represented as an element of V^∨ ⊗ W in this way.

The above is perhaps a bit dense, so here is a concrete example.

Example 11.3.4 (Explicit example)
Let V = ℝ² and take a basis e₁, e₂ of V. Then define T : V → V by

T = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}.

Then we have

Ψ(e₁^∨ ⊗ e₁ + 2e₂^∨ ⊗ e₁ + 3e₁^∨ ⊗ e₂ + 4e₂^∨ ⊗ e₂) = T.

The beauty is that the Ψ definition is basis-free; thus even if we change the basis, although the above expression will look completely different, the actual element in V∨ ⊗ V doesn’t change.
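In coordinates, the pure tensor e∨i ⊗ ej corresponds to the matrix sending ei to ej, i.e. to the outer product of ej with ei. As a numeric illustration of Ψ (Python with numpy; a sketch of mine, not part of the text):

    import numpy as np

    T = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
    e = np.eye(2)  # rows are the standard basis e1, e2

    # Psi sends the element sum_{i,j} T[j, i] * (ei∨ ⊗ ej) to T itself;
    # each pure tensor ei∨ ⊗ ej is the matrix outer(ej, ei).
    rebuilt = sum(T[j, i] * np.outer(e[j], e[i])
                  for i in range(2) for j in range(2))
    assert np.allclose(rebuilt, T)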

Despite this, we’ll indulge ourselves in using coordinates for the proof.

Proof of Theorem 11.3.3. This looks intimidating, but it’s actually not difficult. We proceed in two steps:

1.
First, we check that Ψ is surjective; every linear map has at least one representation in V∨ ⊗ W. To see this, take any T : V → W. For concreteness, suppose V has basis e1, e2, e3 and that T(e1) = w1, T(e2) = w2 and T(e3) = w3. Then the element

e∨1 ⊗ w1 + e∨2 ⊗ w2 + e∨3 ⊗ w3

works, as it is contrived to agree with T on the basis elements ei.

2.
So it suffices to check now that dim V∨ ⊗ W = dim Hom(V,W). Certainly, V∨ ⊗ W has dimension dim V ⋅ dim W. But by viewing Hom(V,W) as dim W × dim V matrices, we see that it too has dimension dim V ⋅ dim W. □

So there is a natural isomorphism V∨ ⊗ W ∼= Hom(V,W). While we did use a basis liberally in the proof that it works, this doesn’t change the fact that the isomorphism is “God-given”, depending only on the spirit of V and W themselves and not which basis we choose to express the vector spaces in.

11.4  The trace

We are now ready to give the definition of a trace. Recall that a square matrix T can be thought of as a map T : V → V. According to the above theorem,

Hom(V,V) ∼= V∨ ⊗ V

so every map V → V can be thought of as an element of V∨ ⊗ V. But we can also define an evaluation map ev : V∨ ⊗ V → k by “collapsing” each pure tensor: f ⊗ v ↦ f(v). So this gives us a composed map

Hom(V,V) −∼=→ V∨ ⊗ V −ev→ k.

The image of T : V → V under this composed map is called the trace of the matrix T.

Example 11.4.1 (Example of a trace)
Continuing the previous example,

Tr T = e∨1(e1) + 2e∨2(e1) + 3e∨1(e2) + 4e∨2(e2) = 1 + 0 + 0 + 4 = 5.

And that is why the trace is the sum of the diagonal entries.
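Since the composed map above never mentions a basis, the trace cannot depend on the basis either. A quick numeric check of this invariance (Python with numpy; my own sketch):

    import numpy as np

    rng = np.random.default_rng(0)
    T = rng.normal(size=(4, 4))
    P = rng.normal(size=(4, 4))  # a random (almost surely invertible) change of basis

    # Writing the same map in the new basis conjugates the matrix,
    # and the trace is unchanged: Tr(P^{-1} T P) = Tr(T).
    assert np.isclose(np.trace(np.linalg.inv(P) @ T @ P), np.trace(T))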

11.5  A few harder problems to think about

Problem 11A (Trace is sum of eigenvalues). Let V be an n-dimensional vector space over an algebraically closed field k. Let T : V → V be a linear map with eigenvalues λ1, λ2, …, λn (counted with algebraic multiplicity). Show that Tr T = λ1 + ⋅⋅⋅ + λn.

Problem 11B (Product of traces). Let T : V → V and S : W → W be linear maps of finite-dimensional vector spaces V and W. Define T ⊗ S : V ⊗ W → V ⊗ W by v ⊗ w ↦ T(v) ⊗ S(w). Prove that

Tr(T ⊗ S) = Tr(T) Tr(S).

Problem 11C (Traces kind of commute). Let T : V → W and S : W → V be linear maps between finite-dimensional vector spaces V and W. Show that

Tr(T ∘ S) = Tr(S ∘ T).

Problem 11D (Putnam 1988). Let V be an n-dimensional vector space. Let T : V → V be a linear map and suppose there exist n + 1 eigenvectors, any n of which are linearly independent. Does it follow that T is a scalar multiple of the identity?

12  Determinant

The goal of this chapter is to give the basis-free definition of the determinant: that is, we’re going to define det T for T : V → V without making reference to the encoding for T. This will make it obvious that the determinant of a matrix does not depend on the choice of basis, and that several properties are vacuously true (e.g. that the determinant is multiplicative).

The determinant is only defined for finite-dimensional vector spaces, so if you want you can restrict your attention to finite-dimensional vector spaces for this chapter. On the other hand we do not need the ground field to be algebraically closed.

12.1  Wedge product

Prototypical example for this section: Λ2(ℝ2) gives parallelograms.

We’re now going to define something called the wedge product. It will look at first like the tensor product V ⊗ V, but we’ll have one extra relation.

For simplicity, I’ll first define the wedge product Λ2(V ). But we will later replace 2 with any n.

Definition 12.1.1. Let V be a k-vector space. The 2-wedge product Λ2(V) is the abelian group generated by elements of the form v ∧ w (where v, w ∈ V), subject to the same relations

(v1 + v2) ∧ w = v1 ∧ w + v2 ∧ w
v ∧ (w1 + w2) = v ∧ w1 + v ∧ w2
(c⋅v) ∧ w = v ∧ (c⋅w)

plus two additional relations:

v ∧ v = 0   and   v ∧ w = −w ∧ v.

As a vector space, its action is given by c⋅(v ∧ w) = (c⋅v) ∧ w = v ∧ (c⋅w).

Exercise 12.1.2. Show that the condition v ∧ w = −(w ∧ v) is actually extraneous: you can derive it from the fact that v ∧ v = 0. (Hint: expand (v + w) ∧ (v + w) = 0.)

This looks almost exactly the same as the definition for a tensor product, with two subtle differences. The first is that we only have V now, rather than V and W as with the tensor product. Secondly, there is a new mysterious relation

v ∧ v = 0 =⇒  v ∧ w = − (w ∧v ).

What’s that doing there? It seems kind of weird.

I’ll give you a hint.

Example 12.1.3 (Wedge product explicit computation)
Let V = ℝ2, and let v = ae1 + be2, w = ce1 + de2. Now let’s compute v ∧ w in Λ2(V).

v ∧ w = (ae1 + be2) ∧ (ce1 + de2)
= ac(e1 ∧ e1) + bd(e2 ∧ e2) + ad(e1 ∧ e2) + bc(e2 ∧ e1)
= ad(e1 ∧ e2) + bc(e2 ∧ e1)
= (ad − bc)(e1 ∧ e2).

What is ad − bc? You might already recognize it: it is the determinant of the 2 × 2 matrix with columns v and w, and it is also the signed area of the parallelogram spanned by v and w.

This is absolutely no coincidence. The wedge product is designed to interpret signed areas. That is, v ∧ w is meant to interpret the area of the parallelogram formed by v and w. You can see why the condition (c⋅v) ∧ w = v ∧ (c⋅w) would make sense now. And now of course you know why v ∧ v ought to be zero: it’s an area-zero parallelogram!

The miracle of wedge products is that the only additional condition we need to add to the tensor product axioms is that v ∧ w = −(w ∧ v). Then suddenly, the wedge will do all our work of interpreting volumes for us.

In analogy to earlier:

Proposition 12.1.4 (Basis of Λ2(V ))
Let V be a vector space with basis e1, …, en. Then a basis of Λ2(V ) is

ei ∧ ej

where i < j. Hence Λ2(V) has dimension (n choose 2).

Proof. Surprisingly slippery, and also omitted. (You can derive it from the corresponding theorem on tensor products.) □

Now I have the courage to define a multi-dimensional wedge product. It’s just the same thing with more wedges.

Definition 12.1.5. Let V be a vector space and m a positive integer. The space Λm(V ) is generated by wedges of the form

v1 ∧ v2 ∧ ⋅⋅⋅∧ vm

subject to relations

⋅⋅⋅ ∧ (v1 + v2) ∧ ⋅⋅⋅ = (⋅⋅⋅ ∧ v1 ∧ ⋅⋅⋅) + (⋅⋅⋅ ∧ v2 ∧ ⋅⋅⋅)
⋅⋅⋅ ∧ (cv1) ∧ v2 ∧ ⋅⋅⋅ = ⋅⋅⋅ ∧ v1 ∧ (cv2) ∧ ⋅⋅⋅
⋅⋅⋅ ∧ v ∧ v ∧ ⋅⋅⋅ = 0
⋅⋅⋅ ∧ v ∧ w ∧ ⋅⋅⋅ = −(⋅⋅⋅ ∧ w ∧ v ∧ ⋅⋅⋅)

As a vector space

c⋅(v1 ∧ v2 ∧ ⋅⋅⋅∧ vm ) = (cv1)∧ v2 ∧ ⋅⋅⋅∧ vm = v1 ∧ (cv2)∧ ⋅⋅⋅∧ vm = ....

This definition is pretty wordy, but in English the three conditions say that we can add in each component; that we can move scalars freely between components; and that swapping two adjacent entries negates the wedge, so a wedge with two equal adjacent entries is zero.

So this is the natural generalization of Λ2(V ). You can convince yourself that any element of the form

⋅⋅⋅ ∧ v ∧ ⋅⋅⋅ ∧ v ∧ ⋅⋅⋅

should still be zero.

Just like e1 e2 was a basis earlier, we can find the basis for general m and n.

Proposition 12.1.6 (Basis of the wedge product)
Let V be a vector space with basis e1, …, en. A basis for Λm(V) consists of the elements

ei1 ∧ ei2 ∧ ⋅⋅⋅∧ eim

where

1 ≤ i1 < i2 < ⋅⋅⋅ < im ≤ n.

Hence Λm(V) has dimension (n choose m).

Sketch of proof. We knew earlier that the ei1 ⊗ ⋅⋅⋅ ⊗ eim form a basis for the tensor product. Here we have the additional properties that (a) if a basis element appears twice then the whole wedge becomes zero, thus we should assume the i’s are all distinct; and (b) we can shuffle around elements (at the cost of a sign), and so we arbitrarily decide to put the basis elements in increasing order. □

12.2  The determinant

Prototypical example for this section: (ae1 + be2) ∧ (ce1 + de2) = (ad − bc)(e1 ∧ e2).

Now we’re ready to define the determinant. Suppose T : V → V is a square matrix. We claim that the map Λm(V) → Λm(V) given on wedges by

v1 ∧ v2 ∧ ⋅⋅⋅∧ vm ↦→ T(v1)∧ T(v2)∧ ⋅⋅⋅∧ T (vm )

and extending linearly to all of Λm(V ) is a linear map. (You can check this yourself if you like.) We call that map Λm(T).

Example 12.2.1 (Example of Λm(T))
In V = ℝ4 with standard basis e1, e2, e3, e4, let T(e1) = e2, T(e2) = 2e3, T(e3) = e3 and T(e4) = 2e2 + e3. Then, for example, Λ2(T) sends

e1 ∧ e2 + e3 ∧ e4 ↦ T(e1) ∧ T(e2) + T(e3) ∧ T(e4)
= e2 ∧ 2e3 + e3 ∧ (2e2 + e3)
= 2(e2 ∧ e3 + e3 ∧ e2)
= 0.

Now here’s something interesting. Suppose V has dimension n, and let m = n. Then Λn(V) has dimension (n choose n) = 1 — it’s a one-dimensional space! Hence Λn(V) ∼= k.

So Λn(T) can be thought of as a linear map from k to k. But we know that a linear map from k to k is just multiplication by a constant. Hence Λn(T) is multiplication by some constant.

Definition 12.2.2. Let T : V → V, where V is an n-dimensional vector space. Then Λn(T) is multiplication by a constant c; we define the determinant of T as det T = c.

Example 12.2.3 (The determinant of a 2 × 2 matrix)
Let V = ℝ2 again with basis e1 and e2. Let

T = [ a  c ]
    [ b  d ].

In other words, T(e1) = ae1 + be2 and T(e2) = ce1 + de2.

Now let’s consider Λ2(V). It has a basis e1 ∧ e2. Now Λ2(T) sends it to

e1 ∧ e2 ↦ T(e1) ∧ T(e2) = (ae1 + be2) ∧ (ce1 + de2) = (ad − bc)(e1 ∧ e2).

So Λ2(T) : Λ2(V) → Λ2(V) is multiplication by det T = ad − bc, because it sent e1 ∧ e2 to (ad − bc)(e1 ∧ e2).

And that is the definition of a determinant. Once again, since we defined it in terms of Λn(T), this definition is totally independent of the choice of basis. In other words, the determinant can be defined based on T : V V alone without any reference to matrices.

Question 12.2.4. Why does Λn(S ∘ T) = Λn(S) ∘ Λn(T)?

In this way, we also get

det(S ∘ T) = det(S )det(T)

for free.

More generally, if we replace 2 by n and write out the result of expanding

(a11e1 + a21e2 + ⋅⋅⋅ + an1en) ∧ ⋅⋅⋅ ∧ (a1ne1 + a2ne2 + ⋅⋅⋅ + annen)

then you will get the formula

det(A) = ∑_{σ∈Sn} sgn(σ) a1,σ(1) a2,σ(2) ⋯ an,σ(n)

called the Leibniz formula for determinants. American high school students will recognize it; this is (unfortunately) taught as the definition of the determinant, rather than a corollary of the better definition using wedge products.

Exercise 12.2.5. Verify that expanding the wedge product yields the Leibniz formula for n = 3.
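For the skeptical, the Leibniz formula is also easy to check against a library determinant (Python; a small sketch of mine using numpy and itertools):

    import numpy as np
    from itertools import permutations

    def sign(perm):
        # Sign of a permutation, computed by counting inversions.
        inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
                  if perm[i] > perm[j])
        return -1 if inv % 2 else 1

    def leibniz_det(A):
        n = A.shape[0]
        return sum(sign(s) * np.prod([A[i, s[i]] for i in range(n)])
                   for s in permutations(range(n)))

    A = np.random.default_rng(1).normal(size=(3, 3))
    assert np.isclose(leibniz_det(A), np.linalg.det(A))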

12.3  Characteristic polynomials, and Cayley-Hamilton

Let’s connect with the theory of eigenvalues. Take a map T : V → V, where V is n-dimensional over an algebraically closed field, and suppose its eigenvalues are λ1, λ2, …, λn (with repetition). Then the characteristic polynomial is given by

pT(X) = (X − λ1)(X − λ2) ⋯ (X − λn).

Note that if we’ve written T in Jordan form, that is,

T = [ λ1  ∗   0   ...  0  ]
    [ 0   λ2  ∗   ...  0  ]
    [ 0   0   λ3  ...  0  ]
    [ ⋮   ⋮   ⋮   ⋱   ⋮  ]
    [ 0   0   0   ...  λn ]

(here each ∗ is either 0 or 1), then we can hack together the definition

pT(X) := det(X⋅idn − T) = det [ X − λ1  ∗       0       ...  0      ]
                              [ 0       X − λ2  ∗       ...  0      ]
                              [ 0       0       X − λ3  ...  0      ]
                              [ ⋮       ⋮       ⋮       ⋱   ⋮      ]
                              [ 0       0       0       ...  X − λn ].

The latter definition is what you’ll see in most linear algebra books, because it lets you define the characteristic polynomial without mentioning the word “eigenvalue” (i.e. entirely in terms of arrays of numbers). I’ll admit it does have the merit that, given any matrix, it makes it easy to compute the characteristic polynomial and hence compute the eigenvalues; but I still think the definition should be done in terms of eigenvalues to begin with. For instance, the determinant definition obscures the following theorem, which is actually a complete triviality.

Theorem 12.3.1 (Cayley-Hamilton)
Let V be a finite-dimensional vector space over an algebraically closed field. Then for any linear map T : V → V, the map pT(T) is the zero map.

Here, by pT(T) we mean that if

pT(X) = X^n + c_{n−1}X^{n−1} + ⋅⋅⋅ + c0

then

pT(T) = T^n + c_{n−1}T^{n−1} + ⋅⋅⋅ + c1T + c0 id

is the zero map, where T^k denotes T applied k times. We saw this concept already when we proved that T had at least one nonzero eigenvector.

Example 12.3.2 (Example of Cayley-Hamilton using determinant definition)
Suppose T = [1 2; 3 4]. Using the determinant definition of the characteristic polynomial, we find that pT(X) = (X − 1)(X − 4) − (2)(3) = X^2 − 5X − 2. Indeed, you can verify that

T^2 − 5T − 2 = [ 7   10 ] − 5⋅[ 1  2 ] − 2⋅[ 1  0 ] = [ 0  0 ]
               [ 15  22 ]     [ 3  4 ]     [ 0  1 ]   [ 0  0 ].
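The same verification, done by machine (Python with numpy; my sketch):

    import numpy as np

    T = np.array([[1.0, 2.0],
                  [3.0, 4.0]])

    # p_T(X) = X^2 - 5X - 2, so p_T(T) should be the zero matrix.
    assert np.allclose(T @ T - 5 * T - 2 * np.eye(2), 0)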

If you define pT without the word eigenvalue, and adopt the evil view that matrices are arrays of numbers, then this looks like a complete miracle. (Indeed, just look at the terrible proofs on Wikipedia.)

But if you use the abstract viewpoint of T as a linear map, then the theorem is almost obvious:

Proof of Cayley-Hamilton. Suppose we write V in Jordan normal form as

V  = J1 ⊕ ⋅⋅⋅⊕ Jm

where Ji has eigenvalue λi and dimension di. By definition,

pT(T) = (T − λ1)^{d1} (T − λ2)^{d2} ⋯ (T − λm)^{dm}.

By definition, (T − λ1)^{d1} is the zero map on J1. So pT(T) is zero on J1. Similarly it’s zero on each of the other Ji’s — end of story. □

Remark 12.3.3 (Tensoring up) The Cayley-Hamilton theorem holds without the hypothesis that k is algebraically closed: because for example any real matrix can be regarded as a matrix with complex coefficients (a trick we’ve mentioned before). I’ll briefly hint at how you can use tensor products to formalize this idea.

Let’s take the space V = ℝ3, with basis e1, e2, e3. Thus objects in V are of the form r1e1 + r2e2 + r3e3 where r1, r2, r3 are real numbers. We want to consider essentially the same vector space, but with complex coefficients zi rather than real coefficients ri.

So here’s what we do: view ℂ as an ℝ-vector space (with basis {1, i}, say) and consider the complexification

Vℂ := ℂ ⊗ℝ V.

Then you can check that its elements are actually of the form

z1 ⊗ e1 + z2 ⊗ e2 + z3 ⊗ e3.

Here, the tensor product is over ℝ, so we have z ⊗ rei = (zr) ⊗ ei for r ∈ ℝ. Then Vℂ can be thought of as a three-dimensional vector space over ℂ, with basis 1 ⊗ ei for i ∈ {1, 2, 3}. In this way, the tensor product lets us formalize the idea that we “fuse on” complex coefficients.

If T : V → W is a map, then Tℂ : Vℂ → Wℂ is just the map z ⊗ v ↦ z ⊗ T(v). You’ll see this written sometimes as Tℂ = id ⊗ T. One can then apply theorems to Tℂ and try to deduce the corresponding results on T.

12.4  A few harder problems to think about

Problem 12A (Column operations). Show that for any real numbers xij (here 1 ≤ i, j ≤ n) we have

det [ x11  x12  ...  x1n ]       [ x11 + cx12  x12  ...  x1n ]
    [ x21  x22  ...  x2n ] = det [ x21 + cx22  x22  ...  x2n ]
    [ ⋮    ⋮    ⋱    ⋮   ]       [ ⋮           ⋮    ⋱    ⋮   ]
    [ xn1  xn2  ...  xnn ]       [ xn1 + cxn2  xn2  ...  xnn ].

Problem 12B (Determinant is product of eigenvalues). Let V be an n-dimensional vector space over an algebraically closed field k. Let T : V → V be a linear map with eigenvalues λ1, λ2, …, λn (counted with algebraic multiplicity). Show that det T = λ1 ⋯ λn.

Problem 12C (Exponential matrix). Let X be an n × n matrix with complex coefficients. We define the exponential map by

exp(X) = 1 + X + X^2/2! + X^3/3! + ⋯

(take it for granted that this converges to some n × n matrix). Prove that

det(exp(X)) = e^{Tr X}.

Problem 12D (Extension to ??). Let T : V → V be a map of finite-dimensional vector spaces. Prove that T is an isomorphism if and only if det T ≠ 0.

Problem 12E (Based on Sweden 2010). A herd of 1000 cows of nonzero weight is given. Prove that we can remove one cow such that the remaining 999 cows cannot be split into two halves of equal weights.

Problem 12F (Putnam 2015). Define S to be the set of real 2 × 2 matrices [ a  b ; c  d ] such that a, b, c, d form an arithmetic progression in that order. Find all M ∈ S such that for some integer k > 1, M^k ∈ S.

Problem 12G. Let V be a finite-dimensional vector space over k and T : V → V. Show that

det(a⋅idV − T) = ∑_{n=0}^{dim V} a^{dim V − n} ⋅ (−1)^n Tr(Λn(T))

where the trace is taken by viewing Λn(T) : Λn(V) → Λn(V).

13  Inner product spaces

It will often turn out that our vector spaces which look like ℝn not only have the notion of addition, but also a notion of orthogonality and a notion of distance. All this is achieved by endowing the vector space with a so-called inner form, which you likely already know as the “dot product” for ℝn. Indeed, in ℝn you already know that the dot product encodes both length (via ∥v∥^2 = v ⋅ v) and angle (v ⋅ w = 0 exactly when v and w are perpendicular).

The purpose of this chapter is to quickly set up this structure in full generality. Some highlights of the chapter:

Throughout this chapter, all vector spaces are over ℝ or ℂ, unless otherwise specified. We’ll generally prefer working over ℂ instead of ℝ since ℂ is algebraically closed (so, e.g. we have Jordan forms). Every real matrix can be thought of as a matrix with complex entries anyways.

13.1  The inner product

Prototypical example for this section: Dot product in ℝn.

13.1.i  For real numbers: bilinear forms

First, let’s define the inner form for real spaces. Rather than the notation v ⋅ w it is most customary to use ⟨v,w⟩ for general vector spaces.

Definition 13.1.1. Let V be a real vector space. A real inner form is a function

⟨∙,∙⟩ : V × V → ℝ

which satisfies the following properties: it is symmetric, meaning ⟨v,w⟩ = ⟨w,v⟩; it is linear in the first argument, meaning ⟨cv1 + v2, w⟩ = c⟨v1,w⟩ + ⟨v2,w⟩; and it is positive definite, meaning ⟨v,v⟩ ≥ 0, with equality if and only if v = 0.

Exercise 13.1.2. Show that linearity in the first argument plus symmetry already gives you linearity in the second argument, so we could edit the above definition by only requiring ⟨− ,v⟩ to be linear.

Example 13.1.3 (ℝn)
As we already know, one can define the inner form on ℝn as follows. Let e1 = (1,0,…,0), e2 = (0,1,…,0), …, en = (0,…,0,1) be the usual basis. Then we let

⟨a1e1 + ⋅⋅⋅ + anen, b1e1 + ⋅⋅⋅ + bnen⟩ := a1b1 + ⋅⋅⋅ + anbn.

It’s easy to see this is bilinear (symmetric and linear in both arguments). To see it is positive definite, note that if ai = bi then the dot product is a1^2 + ⋅⋅⋅ + an^2, which is zero exactly when all ai are zero.

13.1.ii  For complex numbers: sesquilinear forms

The definition for a complex inner product space is similar, but has one difference: rather than symmetry we instead have conjugate symmetry, meaning ⟨v,w⟩ is the complex conjugate of ⟨w,v⟩. Thus, while we still have linearity in the first argument, we actually have a different linearity for the second argument. To be explicit:

Definition 13.1.4. Let V be a complex vector space. A complex inner product is a function

⟨∙,∙⟩ : V × V → ℂ

which satisfies the following properties: it satisfies conjugate symmetry, meaning ⟨v,w⟩ is the complex conjugate of ⟨w,v⟩; it is linear in the first argument; and it is positive definite, meaning ⟨v,v⟩ is a nonnegative real number, equal to zero exactly when v = 0.

Exercise 13.1.5. Show that anti-linearity in the second argument follows from conjugate symmetry plus linearity in the first argument.

Example 13.1.6 (ℂn)
The dot product in ℂn is defined as follows: let e1, e2, …, en be the standard basis. For complex numbers wi, zi we set

⟨w1e1 + ⋅⋅⋅ + wnen, z1e1 + ⋅⋅⋅ + znen⟩ := w1z̄1 + ⋅⋅⋅ + wnz̄n.

Question 13.1.7. Check that the above is in fact a complex inner form.

13.1.iii  Inner product space

It’ll be useful to treat both types of spaces simultaneously:

Definition 13.1.8. An inner product space is either a real vector space equipped with a real inner form, or a complex vector space equipped with a complex inner form.

A linear map between inner product spaces is a map between the underlying vector spaces (we do not require any compatibility with the inner form).

Remark 13.1.9 (Why sesquilinear?) The above example explains one reason why we want conjugate symmetry rather than just symmetry. If we had tried to define the dot product as ∑ wizi, then we would have lost the condition of being positive definite, because there is no guarantee that ⟨v,v⟩ = ∑ zi^2 will even be a real number at all. On the other hand, with conjugate symmetry we actually enforce that ⟨v,v⟩ equals its own conjugate, i.e. ⟨v,v⟩ ∈ ℝ for every v.

Let’s make this point a bit more forcefully. Suppose we tried to put a bilinear form ⟨−,−⟩ on a complex vector space V. Let e be any vector with ⟨e,e⟩ = 1 (a unit vector). Then we would instead get ⟨ie, ie⟩ = i^2⟨e,e⟩ = −1; this is a vector whose length-squared is −1, which is not okay! That’s why it is important that, when we have a complex inner product space, our form is sesquilinear, not bilinear.

Now that we have a dot product, we can talk both about the norm and orthogonality.

13.2  Norms

Prototypical example for this section: ℝn becomes its usual Euclidean space with the vector norm.

The inner form equips our vector space with a notion of distance, which we call the norm.

Definition 13.2.1. Let V be an inner product space. The norm of v ∈ V is defined by

∥v∥ = √⟨v,v⟩.

This definition makes sense because we assumed our form to be positive definite, so ⟨v,v⟩ is a nonnegative real number.

Example 13.2.2 (ℝn and ℂn are normed vector spaces)
When V = ℝn or V = ℂn with the standard dot product, then the norm of v corresponds to the absolute value that we are used to.

Our goal now is to prove that

With the metric d(v,w) = ∥v − w ∥, V becomes a metric space.

Question 13.2.3. Verify that d(v,w) = 0 if and only if v = w.

So we just have to establish the triangle inequality. Let’s now prove something we all know and love, which will be a stepping stone later:

Lemma 13.2.4 (Cauchy-Schwarz)
Let V be an inner product space. For any v, w ∈ V we have

|⟨v,w⟩| ≤ ∥v∥∥w ∥

with equality if and only if v and w are linearly dependent.

Proof. The theorem is immediate if ⟨v,w ⟩ = 0. It is also immediate if ∥v ∥ ∥w ∥ = 0, since then one of v or w is the zero vector. So henceforth we assume all these quantities are nonzero (as we need to divide by them later).

The key to the proof is to think about the equality case: we’ll use the inequality ⟨cv − w, cv − w⟩ ≥ 0. Deferring the choice of c until later, we compute

0 ≤ ⟨cv − w, cv − w⟩
= ⟨cv, cv⟩ − ⟨cv, w⟩ − ⟨w, cv⟩ + ⟨w, w⟩
= |c|^2⟨v,v⟩ − c⟨v,w⟩ − c̄⟨w,v⟩ + ⟨w,w⟩
= |c|^2∥v∥^2 + ∥w∥^2 − (c⟨v,w⟩ + c̄⟨w,v⟩)

so that, since c̄⟨w,v⟩ is the conjugate of c⟨v,w⟩,

2 Re[c⟨v,w⟩] ≤ |c|^2∥v∥^2 + ∥w∥^2.

At this point, a good choice of c is

c = (∥w∥/∥v∥) ⋅ (|⟨v,w⟩| / ⟨v,w⟩)

since then

c⟨v,w⟩ = (∥w∥/∥v∥) |⟨v,w⟩|   and   |c| = ∥w∥/∥v∥,

whence the inequality becomes

2 (∥w∥/∥v∥) |⟨v,w⟩| ≤ 2∥w∥^2
|⟨v,w⟩| ≤ ∥v∥ ∥w∥. □

Thus:

Theorem 13.2.5 (Triangle inequality)
We always have

∥v∥ + ∥w∥ ≥ ∥v + w∥

with equality if and only if v and w are linearly dependent.

Exercise 13.2.6. Prove this by squaring both sides, and applying Cauchy-Schwarz.

In this way, our vector space now has a topological structure of a metric space.

13.3  Orthogonality

Prototypical example for this section: Still ℝn!

Our next goal is to give the geometric notion of “perpendicular”. The definition is easy enough:

Definition 13.3.1. Two nonzero vectors v and w in an inner product space are orthogonal if ⟨v,w ⟩ = 0.

As we expect from our geometric intuition in ℝn, this implies independence:

Lemma 13.3.2 (Orthogonal vectors are independent)
Any set of pairwise orthogonal vectors v1, v2, …, vn, with ∥vi∥ ≠ 0 for each i, is linearly independent.

Proof. Consider a dependence

a1v1 + ⋅⋅⋅+ anvn = 0

for ai in ℝ or ℂ. Then

0 = ⟨v1, ∑ aivi⟩ = ā1∥v1∥^2.

Hence a1 = 0, since we assumed ∥v1∥ ≠ 0. Similarly a2 = ⋅⋅⋅ = an = 0. □

In light of this, we can now consider a stronger condition on our bases:

Definition 13.3.3. An orthonormal basis of a finite-dimensional inner product space V is a basis e1, …, en such that ∥ei∥ = 1 for every i and ⟨ei,ej⟩ = 0 for any i ≠ j.

Example 13.3.4 (ℝn and ℂn have standard bases)
In ℝn and ℂn equipped with the standard dot product, the standard basis e1, …, en is also orthonormal.

This is no loss of generality:

Theorem 13.3.5 (Gram-Schmidt)
Let V be a finite-dimensional inner product space. Then it has an orthonormal basis.

Sketch of Proof. One constructs the orthonormal basis explicitly from any basis e1, …, en of V. Define proju(v) = (⟨v,u⟩ / ⟨u,u⟩) ⋅ u. Then recursively define

u1 = e1
u2 = e2 − proju1(e2)
u3 = e3 − proju1(e3) − proju2(e3)
⋮
un = en − proju1(en) − ⋅⋅⋅ − proju(n−1)(en).

One can show the ui are pairwise orthogonal and not zero; normalizing each ui to unit length then yields an orthonormal basis. □
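Here is the same recursion as runnable code, including the final normalization (Python with numpy; a minimal sketch for the real case):

    import numpy as np

    def gram_schmidt(basis):
        """Orthonormalize a list of linearly independent vectors."""
        ortho = []
        for e in basis:
            u = e.astype(float)
            for q in ortho:               # subtract the projection onto each earlier q
                u = u - np.dot(u, q) * q  # q is already a unit vector, so <q,q> = 1
            ortho.append(u / np.linalg.norm(u))
        return ortho

    basis = [np.array([1.0, 1.0, 0.0]),
             np.array([1.0, 0.0, 1.0]),
             np.array([0.0, 1.0, 1.0])]
    Q = np.array(gram_schmidt(basis))
    assert np.allclose(Q @ Q.T, np.eye(3))  # rows are orthonormal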

Thus, we can generally assume our bases are orthonormal.

Worth remarking:

Example 13.3.6 (The dot product is the “only” inner form)
Let V be a finite-dimensional inner product space, and consider any orthonormal basis e1, …, en. Then we have that

⟨a1e1 + ⋅⋅⋅ + anen, b1e1 + ⋅⋅⋅ + bnen⟩ = ∑_{i,j=1}^{n} ai b̄j ⟨ei,ej⟩ = ∑_{i=1}^{n} ai b̄i

owing to the fact that the {ei} are orthonormal.

And now you know why the dot product expression is so ubiquitous.

13.4  Hilbert spaces

In algebra we are usually scared of infinity, and so when we defined a basis of a vanilla vector space many chapters ago, we only allowed finite linear combinations. However, if we have an inner product space, then it is a metric space and we can sometimes actually talk about convergence.

Here is how it goes:

Definition 13.4.1. A Hilbert space is an inner product space V such that the corresponding metric space is complete.

In that case, it will now often make sense to take infinite linear combinations, because we can look at the sequence of partial sums and let it converge. Here is how we might do it. Let’s suppose we have e1, e2, …an infinite sequence of vectors with norm 1 and which are pairwise orthogonal. Suppose c1, c2, …, is a sequence of real or complex numbers. Then consider the sequence

v1 = c1e1
v2 = c1e1 + c2e2
v3 = c1e1 + c2e2 + c3e3
..
.

Proposition 13.4.2 (Convergence criteria in a Hilbert space)
The sequence (vi) defined above converges if and only if ∑|ci|^2 < ∞.

Proof. This will make more sense if you read ??, so you could skip this proof if you haven’t read the chapter. The sequence vi converges if and only if it is Cauchy, meaning that when i < j,

∥vj − vi∥^2 = |c_{i+1}|^2 + ⋅⋅⋅ + |cj|^2

tends to zero as i and j get large. This is equivalent to the sequence sn = |c1|^2 + ⋅⋅⋅ + |cn|^2 being Cauchy.

Since ℝ is complete, sn is Cauchy if and only if it converges. Since sn is a nondecreasing sequence of nonnegative real numbers, convergence holds if and only if sn is bounded, or equivalently if ∑|ci|^2 < ∞. □

Thus, when we have a Hilbert space, we change our definition slightly:

Definition 13.4.3. An orthonormal basis for a Hilbert space V is a (possibly infinite) sequence e1, e2, … of vectors such that ⟨ei,ei⟩ = 1 and ⟨ei,ej⟩ = 0 for i ≠ j, and such that every v ∈ V can be written as a convergent sum v = ∑i ciei for some scalars ci.

That’s the official definition, anyways. (Note that if dimV < , this agrees with our usual definition, since then there are only finitely many ei.) But for our purposes you can mostly not worry about it and instead think:

A Hilbert space is an inner product space whose basis requires infinite linear combinations, not just finite ones.

The technical condition ∑|ci|^2 < ∞ is exactly the one which ensures the infinite sum makes sense.

13.5  A few harder problems to think about

Problem 13A (Pythagorean theorem). Show that if ⟨v,w⟩ = 0 in an inner product space, then ∥v∥2 + ∥w∥2 = ∥v + w∥2.

Problem 13B (Finite-dimensional =⇒ Hilbert). Show that a finite-dimensional inner product space is a Hilbert space.

Problem 13C (Taiwan IMO camp). In a town there are n people and k clubs. Each club has an odd number of members, and any two clubs have an even number of common members. Prove that k ≤ n.

Problem 13D (Inner product structure of tensors). Let V and W be finite-dimensional inner product spaces over k, where k is either ℝ or ℂ.

(a)
Find a canonical way to make V ⊗k W into an inner product space too.
(b)
Let e1, …, en be an orthonormal basis of V and f1, …, fm be an orthonormal basis of W. What’s an orthonormal basis of V ⊗ W?

Problem 13E (Putnam 2014). Let n be a positive integer. What is the largest k for which there exist n × n matrices M1, …, Mk and N1, …, Nk with real entries such that for all i and j, the matrix product MiNj has a zero entry somewhere on its diagonal if and only if i ≠ j?

Problem 13F (Sequence space). Consider the space ℓ^2 of infinite sequences of real numbers a = (a1, a2, …) satisfying ∑i ai^2 < ∞. We equip it with the dot product

⟨a,b⟩ = ∑i aibi.

Is this a Hilbert space? If so, identify a Hilbert basis.

Problem 13G (Kuratowski embedding). A Banach space is a normed vector space V, such that the corresponding metric space is complete. (So a Hilbert space is a special case of a Banach space.)

Let (M,d) be any metric space. Prove that there exists a Banach space X and an injective function f : M → X such that d(x,y) = ∥f(x) − f(y)∥ for any x and y.

14  Bonus: Fourier analysis

Now that we’ve worked hard to define abstract inner product spaces, I want to give an (optional) application: how to set up Fourier analysis correctly, using this language.

For fun, I also prove a form of Arrow’s Impossibility Theorem using binary Fourier analysis.

In what follows, we let 𝕋 = ℝ/ℤ denote the “circle group”, thought of as the additive group of “real numbers modulo 1”. There is a canonical map e : 𝕋 → ℂ sending 𝕋 to the complex unit circle, given by

e(𝜃) = exp(2πi𝜃).

14.1  Synopsis

Suppose we have a domain Z and are interested in functions f : Z → ℂ. Naturally, the set of such functions forms a complex vector space. We like to equip the set of such functions with a positive definite inner product.

The idea of Fourier analysis is to then select an orthonormal basis for this set of functions, say (eξ)ξ, which we call the characters; the indices ξ are called the frequencies. In that case, since we have a basis, every function f : Z → ℂ becomes a sum

f(x) = ∑ξ f̂(ξ) eξ(x)

where f̂(ξ) are complex coefficients of the basis; appropriately we call these the Fourier coefficients. The variable x ∈ Z is referred to as the physical variable. This is generally good because the characters are deliberately chosen to be nice “symmetric” functions, like sine or cosine waves or other periodic functions. Thus we decompose an arbitrarily complicated function into a sum of nice ones.

14.2  A reminder on Hilbert spaces

For convenience, we record a few facts about orthonormal bases.

Proposition 14.2.1 (Facts about orthonormal bases)
Let V be a complex Hilbert space with inner form ⟨−,−⟩ and suppose x = ∑ξ aξeξ and y = ∑ξ bξeξ where the eξ form an orthonormal basis. Then

⟨x, x⟩ = ∑ξ |aξ|^2
aξ = ⟨x, eξ⟩
⟨x, y⟩ = ∑ξ aξ b̄ξ.

Exercise 14.2.2. Prove all of these. (You don’t need any of the preceding section, it’s only there to motivate the notation with lots of scary ξ’s.)

In what follows, most of the examples will be of finite-dimensional inner product spaces (which are thus Hilbert spaces), but the example of “square-integrable functions” will actually be an infinite dimensional example. Fortunately, as I alluded to earlier, this is no cause for alarm and you can mostly close your eyes and not worry about infinity.

14.3  Common examples

14.3.i  Binary Fourier analysis on {±1}^n

Let Z = {±1}^n for some positive integer n, so we are considering functions f(x1, …, xn) accepting binary values. Then the functions Z → ℂ form a 2^n-dimensional vector space ℂ^Z, and we endow it with the inner form

⟨f,g⟩ = (1/2^n) ∑_{x∈Z} f(x) ḡ(x).

In particular,

⟨f,f⟩ = (1/2^n) ∑_{x∈Z} |f(x)|^2

is the average of the squares; this establishes also that ⟨−,−⟩ is positive definite.

In that case, the multilinear polynomials form a basis of ℂ^Z, that is, the polynomials

χS(x1, …, xn) = ∏_{s∈S} xs.

Exercise 14.3.1. Show that they’re actually orthonormal under ⟨−,−⟩. This proves they form a basis, since there are 2^n of them.

Thus our frequency set is actually the subsets S ⊆ {1, …, n}. Thus, we have a decomposition

f = ∑_{S⊆{1,…,n}} f̂(S) χS.

Example 14.3.2 (An example of binary Fourier analysis)
Let n = 2. Then binary functions on {±1}^2 have a basis given by the four polynomials

1,  x1,  x2,  x1x2.

For example, consider the function f which is 1 at (1,1) and 0 elsewhere. Then we can put

f(x1,x2) = ((x1 + 1)/2) ⋅ ((x2 + 1)/2) = (1/4)(1 + x1 + x2 + x1x2).

So the Fourier coefficients are f̂(S) = 1/4 for each of the four S’s.
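These coefficients are also easy to recompute by brute force from f̂(S) = ⟨f, χS⟩ (Python; a sketch of mine):

    from itertools import product

    def f(x):               # the function that is 1 at (1, 1) and 0 elsewhere
        return 1 if x == (1, 1) else 0

    def chi(S, x):          # chi_S(x) = product of x_s over s in S
        out = 1
        for s in S:
            out *= x[s]
        return out

    for S in [(), (0,), (1,), (0, 1)]:
        coeff = sum(f(x) * chi(S, x) for x in product([1, -1], repeat=2)) / 4
        assert coeff == 0.25   # f-hat(S) = 1/4 for all four S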

This notion is useful in particular for binary functions f : {±1}^n → {±1}; for these functions (and products thereof), we always have ⟨f,f⟩ = 1.

It is worth noting that the frequency ∅ plays a special role:

Exercise 14.3.3. Show that

f̂(∅) = (1/|Z|) ∑_{x∈Z} f(x).

14.3.ii  Fourier analysis on finite groups Z

This time, suppose we have a finite abelian group Z, and consider functions Z → ℂ; this is a |Z|-dimensional vector space. The inner product is the same as before:

⟨f,g⟩ = (1/|Z|) ∑_{x∈Z} f(x) ḡ(x).

To proceed, we’ll need to be able to multiply two elements of Z. This is a bit of a nuisance since it actually won’t really matter what map I pick, so I’ll move briskly; feel free to skip most or all of the remaining paragraph.

Definition 14.3.4. We select a symmetric non-degenerate bilinear form

⋅ : Z × Z → 𝕋

satisfying the following properties: ξ ⋅ (x1 + x2) = ξ⋅x1 + ξ⋅x2 and (ξ1 + ξ2) ⋅ x = ξ1⋅x + ξ2⋅x (bilinearity); ξ ⋅ x = x ⋅ ξ (symmetry); and for each ξ ≠ 0 there is some x with ξ ⋅ x ≠ 0 (non-degeneracy).

Example 14.3.5 (The form on ℤ/nℤ)
If Z = ℤ/nℤ then ξ ⋅ x = (ξx)/n satisfies the above.

In general, it turns out finite abelian groups decompose as the sum of cyclic groups (see ??), which makes it relatively easy to find such a ⋅; but as I said the choice won’t matter, so let’s move on.

Now for the fun part: defining the characters.

Proposition 14.3.6 (eξ are orthonormal)
For each ξ ∈ Z we define the character

eξ(x) = e(ξ ⋅ x).

The |Z| characters form an orthonormal basis of the space of functions Z → ℂ.

Proof. I recommend skipping this one, but it is:

⟨eξ, eξ′⟩ = (1/|Z|) ∑_{x∈Z} e(ξ⋅x) e(−ξ′⋅x)
= (1/|Z|) ∑_{x∈Z} e((ξ − ξ′)⋅x)

(conjugating e(θ) negates θ). If ξ = ξ′ every term equals 1 and the sum is 1; otherwise, by non-degeneracy the map x ↦ e((ξ − ξ′)⋅x) is a nontrivial character, and its values sum to 0. □

In this way, the set of frequencies is also Z, but the ξ ∈ Z play very different roles from the “physical” x ∈ Z. Here is an example which might be enlightening.

Example 14.3.7 (Cube roots of unity filter)
Suppose Z = ℤ/3, with the inner form given by ξ ⋅ x = (ξx)/3. Let ω = exp(2πi/3) be a primitive cube root of unity. Note that

eξ(x) = { 1 if ξ = 0;  ω^x if ξ = 1;  ω^{2x} if ξ = 2 }.

Then given f : Z → ℂ with f(0) = a, f(1) = b, f(2) = c, we obtain

f(x) = ((a + b + c)/3) ⋅ 1 + ((a + ω^2 b + ωc)/3) ⋅ ω^x + ((a + ωb + ω^2 c)/3) ⋅ ω^{2x}.

In this way we derive that the transforms are

f̂(0) = (a + b + c)/3
f̂(1) = (a + ω^2 b + ωc)/3
f̂(2) = (a + ωb + ω^2 c)/3.

Exercise 14.3.8. Show that in analogy to f̂(∅) for binary Fourier analysis, we now have

f̂(0) = (1/|Z|) ∑_{x∈Z} f(x).

Olympiad contestants may recognize the previous example as a “roots of unity filter”, which is exactly the point. For concreteness, suppose one wants to compute

(1000 choose 0) + (1000 choose 3) + ⋅⋅⋅ + (1000 choose 999).

In that case, we can consider the function

w : ℤ/3 → ℂ

such that w(0) = 1 but w(1) = w(2) = 0. By abuse of notation we will also think of w as a function ℤ → ℂ. Then the sum in question is

∑_n (1000 choose n) w(n) = ∑_n (1000 choose n) ∑_{k=0,1,2} ŵ(k) ω^{kn}
= ∑_{k=0,1,2} ŵ(k) ∑_n (1000 choose n) ω^{kn}
= ∑_{k=0,1,2} ŵ(k) (1 + ω^k)^{1000}.

In our situation, we have ŵ(0) = ŵ(1) = ŵ(2) = 1/3, and we have evaluated the desired sum. More generally, we can take any periodic weight w and use Fourier analysis in order to interchange the order of summation.

Example 14.3.9 (Binary Fourier analysis)
Suppose Z = {±1}^n, viewed as an abelian group under pointwise multiplication, hence isomorphic to (ℤ/2ℤ)^n. Assume we pick the dot product defined by

⟨ξ, x⟩ = (1/2) ∑_i ((ξi − 1)/2) ⋅ ((xi − 1)/2)

where ξ = (ξ1, …, ξn) and x = (x1, …, xn).

We claim this coincides with the first example we gave. Indeed, let S ⊆ {1, …, n} and let ξ ∈ {±1}^n be the vector which is −1 at positions in S, and +1 at positions not in S. Then the character χS from the previous example coincides with the character eξ in the new notation. In particular, f̂(S) = f̂(ξ).

Thus Fourier analysis on a finite group Z subsumes binary Fourier analysis.

14.3.iii  Fourier series for functions in L2([−π,π])

This is the most famous one, and hence the one you’ve heard of.

Definition 14.3.10. The space L2([−π,π]) consists of all functions f : [−π,π] → ℂ such that the integral ∫_{[−π,π]} |f(x)|^2 dx exists and is finite, modulo the relation that a function which is zero “almost everywhere” is considered to equal zero.

It is made into an inner product space according to

⟨f,g⟩ = (1/2π) ∫_{[−π,π]} f(x) ḡ(x) dx.

It turns out (we won’t prove) that this is an (infinite-dimensional) Hilbert space!

Now, the beauty of Fourier analysis is that this space has a great basis:

Theorem 14.3.11 (The classical Fourier basis)
For each integer n, define

en(x ) = exp(inx).

Then the en form an orthonormal basis of the Hilbert space L2([−π,π]).

Thus this time the frequency set is infinite, and we have

f(x) = ∑_n f̂(n) exp(inx)   almost everywhere

for coefficients f̂(n) with ∑_n |f̂(n)|^2 < ∞. Since the frequency set is indexed by ℤ, we call this a Fourier series to reflect the fact that the index is n ∈ ℤ.

Exercise 14.3.12. Show once again

f̂(0) = (1/2π) ∫_{[−π,π]} f(x) dx.

14.4  Summary, and another teaser

We summarize our various flavors of Fourier analysis in the following table.

Type            Physical var    Frequency var            Basis functions
Binary          {±1}^n          Subsets S ⊆ {1,…,n}      ∏_{s∈S} xs
Finite group    Z               ξ ∈ Z, choice of ⋅       e(ξ⋅x)
Fourier series  𝕋 or [−π,π]     n ∈ ℤ                    exp(inx)
Discrete        ℤ/nℤ            ξ ∈ ℤ/nℤ                 e(ξx/n)

I snuck in a fourth row with Z = ℤ/nℤ, but it’s a special case of the second row, so no cause for alarm.

Alluding to the future, I want to hint at how ?? starts. Each one of these is really a statement about how functions G → ℂ can be expressed in terms of functions Ĝ → ℂ, for some “dual” group Ĝ. In that sense, we could rewrite the above table as:

Name            Domain G        Dual Ĝ             Characters
Binary          {±1}^n          S ⊆ {1,…,n}        ∏_{s∈S} xs
Finite group    Z               ξ ∈ Ẑ ∼= Z         e(ξ⋅x)
Fourier series  𝕋 ∼= [−π,π]     n ∈ ℤ              exp(inx)
Discrete        ℤ/nℤ            ξ ∈ ℤ/nℤ           e(ξx/n)

It will turn out that in general we can say something about many different domains G, once we know what it means to integrate a measure. This is the so-called Pontryagin duality; and it is discussed as a follow-up bonus in ?? .

14.5  Parseval and friends

Here is a fun section in which you get to learn a lot of big names quickly. Basically, we can take each of the three results from Proposition 14.2.1, translate it into the context of our Fourier analysis (for which we have an orthonormal basis of the Hilbert space), and get a big-name result.

Corollary 14.5.1 (Parseval theorem)
Let f : Z → ℂ, where Z is a finite abelian group. Then

∑ξ |f̂(ξ)|^2 = (1/|Z|) ∑_{x∈Z} |f(x)|^2.

Similarly, if f : [−π,π] → ℂ is square-integrable then its Fourier series satisfies

∑n |f̂(n)|^2 = (1/2π) ∫_{[−π,π]} |f(x)|^2 dx.

Proof. Recall that ⟨f,f⟩ is equal to the square sum of the coefficients. □

Corollary 14.5.2 (Fourier inversion formula)
Let f : Z → ℂ, where Z is a finite abelian group. Then

f̂(ξ) = (1/|Z|) ∑_{x∈Z} f(x) e(−ξ⋅x).

Similarly, if f : [−π,π] → ℂ is square-integrable then its Fourier coefficients are given by

f̂(n) = (1/2π) ∫_{[−π,π]} f(x) exp(−inx) dx.

Proof. Recall that in an orthonormal basis (eξ)ξ, the coefficient of eξ in f is ⟨f, eξ⟩. □

Question 14.5.3. What happens when ξ = 0 above?

Corollary 14.5.4 (Plancherel theorem)
Let f, g : Z → ℂ, where Z is a finite abelian group. Then

⟨f,g⟩ = ∑_{ξ∈Z} f̂(ξ) conj(ĝ(ξ))

(here conj denotes complex conjugation). Similarly, if f, g : [−π,π] → ℂ are square-integrable then

⟨f,g⟩ = ∑_n f̂(n) conj(ĝ(n)).

Question 14.5.5. Prove this one in one line (like before).

14.6  Application: Basel problem

One cute application of Fourier analysis on L2([−π,π]) is that you can compute some otherwise hard-to-compute sums, as long as you are willing to use a little calculus.

Here is the classical one:

Theorem 14.6.1 (Basel problem)
We have

∑_{n≥1} 1/n^2 = π^2/6.

The proof is to consider the identity function f(x) = x, which is certainly square-integrable. Then by Parseval, we have

∑_{n∈ℤ} |f̂(n)|^2 = ⟨f,f⟩ = (1/2π) ∫_{[−π,π]} |f(x)|^2 dx.

A calculus computation gives

(1/2π) ∫_{[−π,π]} x^2 dx = π^2/3.

On the other hand, we will now compute all Fourier coefficients. We have already that

f̂(0) = (1/2π) ∫_{[−π,π]} f(x) dx = (1/2π) ∫_{[−π,π]} x dx = 0.

For n ≠ 0, we have by definition (or “Fourier inversion formula”, if you want to use big words) the formula

f̂(n) = ⟨f, exp(inx)⟩
= (1/2π) ∫_{[−π,π]} x exp(−inx) dx

(the conjugation on the second argument of the inner form flips the sign of the exponent). The anti-derivative of x exp(−inx) is equal to (1/n^2) exp(−inx)(1 + inx), which thus with some more calculation gives that

f̂(n) = ((−1)^n / n) ⋅ i.

So

∑_{n∈ℤ} |f̂(n)|^2 = 2 ∑_{n≥1} 1/n^2

implying the result.
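Numerically, the partial sums of ∑ 1/n^2 do crawl toward π^2/6, with a truncation error of about 1/N (Python; my sketch):

    import math

    N = 10**6
    partial = sum(1 / n**2 for n in range(1, N + 1))
    assert abs(partial - math.pi**2 / 6) < 2 / N  # the tail is roughly 1/N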

14.7  Application: Arrow’s Impossibility Theorem

As an application of binary Fourier analysis, we now prove a form of Arrow’s theorem.

Consider n voters voting among 3 candidates A, B, C. Each voter specifies a tuple vi = (xi, yi, zi) ∈ {±1}^3 as follows: xi = +1 if voter i prefers A over B (and xi = −1 otherwise); yi = +1 if voter i prefers B over C; and zi = +1 if voter i prefers C over A.

Tacitly, we only consider 3! = 6 possibilities for vi: we forbid “paradoxical” votes of the form xi = yi = zi by assuming that people’s votes are consistent (meaning the preferences are transitive).

For brevity, let x∙ = (x1, …, xn) and define y∙ and z∙ similarly. Then, we can consider a voting mechanism

f : {±1}^n → {±1}
g : {±1}^n → {±1}
h : {±1}^n → {±1}

such that f(x∙) is the global preference between A and B, g(y∙) the global preference between B and C, and h(z∙) the global preference between C and A. We’d like to avoid situations where the global preference (f(x∙), g(y∙), h(z∙)) is itself paradoxical.

Let 𝔼f denote the average value of f across all 2^n inputs. Define 𝔼g and 𝔼h similarly. We’ll add an assumption that 𝔼f = 𝔼g = 𝔼h = 0, which provides symmetry (and e.g. excludes the possibility that f, g, h are constant functions which ignore voter input). With that we will prove the following result:

Theorem 14.7.1 (Arrow Impossibility Theorem)
Assume that (f,g,h) always avoids paradoxical outcomes, and assume 𝔼f = 𝔼g = 𝔼h = 0. Then (f,g,h) is either a dictatorship or anti-dictatorship: there exists a “dictator” k such that

f(x∙) = ±xk,    g(y∙) = ±yk,     h(z∙) = ±zk

where all three signs coincide.

Unlike the usual Arrow theorem, we do not assume that f(+1, …, +1) = +1 (hence the possibility of anti-dictatorship).

Proof. Suppose the voters each randomly select one of the 3! = 6 possible consistent votes. In Problem 14B it is shown that the exact probability of a paradoxical outcome for any functions f, g, h is given exactly by

1/4 + (1/4) ∑_{S⊆{1,…,n}} (−1/3)^{|S|} (f̂(S)ĝ(S) + ĝ(S)ĥ(S) + ĥ(S)f̂(S)).

Assume that this probability (of a paradoxical outcome) equals 0. Then, we derive

1 = ∑_{S⊆{1,…,n}} −(−1/3)^{|S|} (f̂(S)ĝ(S) + ĝ(S)ĥ(S) + ĥ(S)f̂(S)).

But now we can just use weak inequalities. We have f̂(∅) = 𝔼f = 0 and similarly for g and h, so we restrict attention to |S| ≥ 1. We then combine the famous inequality |ab + bc + ca| ≤ a^2 + b^2 + c^2 (which is true across all real numbers) to deduce that

1 = ∑_{S⊆{1,…,n}} −(−1/3)^{|S|} (f̂(S)ĝ(S) + ĝ(S)ĥ(S) + ĥ(S)f̂(S))
≤ ∑_{S : |S|≥1} (1/3)^{|S|} (f̂(S)^2 + ĝ(S)^2 + ĥ(S)^2)
≤ ∑_{S⊆{1,…,n}} (1/3)^1 (f̂(S)^2 + ĝ(S)^2 + ĥ(S)^2)
= (1/3)(1 + 1 + 1) = 1

with the last step by Parseval. So all inequalities must be sharp, and in particular f, g, h are supported on one-element sets, i.e. they are linear in inputs. As f, g, h are ±1 valued, each f, g, h is itself either a dictator or anti-dictator function. Since (f,g,h) is always consistent, this implies the final result. □

14.8  A few harder problems to think about

Problem 14A (For calculus fans). Prove that

∑_{n≥1} 1/n^4 = π^4/90.

Problem 14B. Let f, g, h : {±1}^n → {±1} be any three functions. For each i, we randomly select (xi, yi, zi) ∈ {±1}^3 subject to the constraint that not all three are equal (hence, choosing among 2^3 − 2 = 6 possibilities). Prove that the probability that

f(x1, …, xn) = g(y1, …, yn) = h(z1, …, zn)

is given by the formula

1/4 + (1/4) ∑_{S⊆{1,…,n}} (−1/3)^{|S|} (f̂(S)ĝ(S) + ĝ(S)ĥ(S) + ĥ(S)f̂(S)).

15  Duals, adjoint, and transposes

This chapter is dedicated to the basis-free interpretation of the transpose and conjugate transpose of a matrix.

Poster corollary: we will see that symmetric matrices with real coefficients are diagonalizable and have real eigenvalues.

15.1  Dual of a map

Prototypical example for this section: The example below.

We go ahead and now define a notion that will grow up to be the transpose of a matrix.

Definition 15.1.1. Let V and W be vector spaces. Suppose T : V → W is a linear map. Then we actually get a map

T∨ : W∨ → V∨
f ↦ f ∘ T.

This map is called the dual map.

Example 15.1.2 (Example of a dual map)
Work over ℝ. Let’s consider V with basis e1, e2, e3 and W with basis f1, f2. Suppose that

T(e1) = f1 + 2f2
T(e2) = 3f1 + 4f2
T(e3) = 5f1 + 6f2.

Now consider V∨ with its dual basis e∨1, e∨2, e∨3 and W∨ with its dual basis f∨1, f∨2. Let’s compute T∨(f∨1) = f∨1 ∘ T: it is given by

f∨1(T(ae1 + be2 + ce3)) = f∨1((a + 3b + 5c)f1 + (2a + 4b + 6c)f2)
= a + 3b + 5c.

So accordingly we can write

T∨(f∨1) = e∨1 + 3e∨2 + 5e∨3

Similarly,

T∨(f∨2) = 2e∨1 + 4e∨2 + 6e∨3.

This determines T∨ completely.

If we write the matrices for T and T∨ in terms of our bases, we now see that

T = [ 1  3  5 ]   and   T∨ = [ 1  2 ]
    [ 2  4  6 ]              [ 3  4 ]
                             [ 5  6 ].

So in our selected basis, we find that the matrices are transposes: mirror images of each other over the diagonal.

Of course, this should work in general.

Theorem 15.1.3 (Transpose interpretation of T∨)
Let V and W be finite-dimensional k-vector spaces. Then, for any T : V → W, the following two matrices are transposes: the matrix of T taken in bases of V and W, and the matrix of T∨ taken in the corresponding dual bases of W∨ and V∨.

Proof. The (i,j)th entry of the matrix of T is the coefficient of fi in T(ej); this is also the coefficient of e∨j in T∨(f∨i) = f∨i ∘ T, which is the (j,i)th entry of the matrix of T∨. □

The nice part of this is that the definition of T∨ is basis-free. So it means that if we start with any linear map T, and then pick whichever bases we feel like, the matrices of T and T∨ will still be transposes.

15.2  Identifying with the dual space

For the rest of this chapter, though, we’ll now bring inner products into the picture.

Earlier I complained that there was no natural isomorphism V ∼= V∨. But in fact, given an inner form we can actually make such an identification: that is, we can naturally associate every linear map ξ : V → k with a vector v ∈ V.

To see how we might do this, suppose V = ℝ3 for now with an orthonormal basis e1, e2, e3. How might we use the inner product to represent a map V → ℝ? For example, take ξ ∈ V∨ with ξ(e1) = 3, ξ(e2) = 4 and ξ(e3) = 5. Actually, I claim that

ξ(v) = ⟨v, 3e1 + 4e2 + 5e3⟩

for every v.

Question 15.2.1. Check this.

And this works beautifully in the real case.

Theorem 15.2.2 (V ∼= V∨ for real inner forms)
Let V be a finite-dimensional real inner product space and V∨ its dual. Then the map V → V∨ given by

v ↦ ⟨−, v⟩ ∈ V∨

is an isomorphism of real vector spaces.

Proof. It suffices to show that the map is injective and surjective. For injectivity: if ⟨−, v⟩ is the zero functional, then in particular ⟨v, v⟩ = 0, so v = 0. For surjectivity: given ξ ∈ V∨, pick an orthonormal basis e1, …, en and let v = ξ(e1)e1 + ⋅⋅⋅ + ξ(en)en; then ⟨−, v⟩ agrees with ξ on every basis element, hence equals ξ.

Actually, since we already know dim V = dim V∨ we only had to prove one of the above. As a matter of personal taste, I find the proof of injectivity more elegant, and the proof of surjectivity more enlightening, so I included both. Thus

If a real inner product space V is given an inner form, then V and V are canonically isomorphic.

Unfortunately, things go awry if V is complex. Here is the result:

Theorem 15.2.3 (V versus V∨ for complex inner forms)
Let V be a finite-dimensional complex inner product space and V∨ its dual. Then the map V → V∨ given by

v ↦ ⟨−, v⟩ ∈ V∨

is a bijection of sets.

Wait, what? Well, the proof above shows that it is both injective and surjective, but why is it not an isomorphism? The answer is that it is not a linear map: since the form is sesquilinear we have for example

iv ↦ ⟨−, iv⟩ = −i⟨−, v⟩

which has introduced a minus sign! In fact, it is an anti-linear map, in the sense we defined before.

Eager readers might try to fix this by defining the isomorphism v ↦ ⟨v, −⟩ instead. However, this also fails, because the right-hand side is not even an element of V∨: it is anti-linear, not linear.

And so we are stuck. Fortunately, we will only need the “bijection” result for what follows, so we can continue on anyways. (If you want to fix this, Problem 15D gives a way to do so.)

15.3  The adjoint (conjugate transpose)

We will see that, as a result of the flipping above, the conjugate transpose is actually the better concept for inner product spaces: since it can be defined using only the inner product without making mention to dual spaces at all.

Definition 15.3.1. Let V and W be finite-dimensional inner product spaces, and let T : V → W. The adjoint (or conjugate transpose) of T, denoted T† : W → V, is defined as follows: for every vector w ∈ W, we let T†(w) ∈ V be the unique vector with

⟨v, T†(w)⟩_V = ⟨T(v), w⟩_W

for every v ∈ V.

Some immediate remarks about this definition: the vector T†(w) exists and is unique, since by the bijection of the previous section the functional v ↦ ⟨T(v), w⟩_W equals ⟨−, x⟩_V for exactly one x ∈ V, and we take T†(w) = x; moreover, one can check that the resulting map T† : W → V is itself linear.

Example 15.3.2 (Example of an adjoint map)
We’ll work over ℂ, so the conjugates are more visible. Let’s consider V with orthonormal basis e1, e2, e3 and W with orthonormal basis f1, f2. We put

T(e1) = if1 + 2f2
T(e2) = 3f1 + 4f2
T(e3) = 5f1 + 6if2.

We compute T†(f1). It is the unique vector x ∈ V such that

⟨v, x⟩_V = ⟨T(v), f1⟩_W

for any v ∈ V. If we expand v = ae1 + be2 + ce3 the above equality becomes

⟨ae1 + be2 + ce3, x⟩_V = ⟨T(ae1 + be2 + ce3), f1⟩_W
= ia + 3b + 5c.

However, since x is in the second argument, this means we actually want to take

T†(f1) = −ie1 + 3e2 + 5e3

so that the sesquilinearity will conjugate the i.
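One can also verify the defining property ⟨T(v), w⟩_W = ⟨v, T†(w)⟩_V numerically, with T† realized as the conjugate transpose of the matrix of T (Python with numpy; a sketch assuming the convention that the inner form is conjugate-linear in its second argument):

    import numpy as np

    # Matrix of T in the orthonormal bases above: columns are T(e1), T(e2), T(e3).
    T = np.array([[1j, 3, 5],
                  [2, 4, 6j]])
    Tdag = T.conj().T          # conjugate transpose, mapping W back to V

    def inner(u, v):           # <u, v> = sum_i u_i * conj(v_i)
        return np.sum(u * np.conj(v))

    rng = np.random.default_rng(2)
    v = rng.normal(size=3) + 1j * rng.normal(size=3)  # a vector of V
    w = rng.normal(size=2) + 1j * rng.normal(size=2)  # a vector of W
    assert np.isclose(inner(T @ v, w), inner(v, Tdag @ w))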

The pattern continues, though we remind the reader that we need the basis to be orthonormal to proceed.

Theorem 15.3.3 (Adjoints are conjugate transposes)
Fix an orthonormal basis of a finite-dimensional inner product space V. Let T : V → V be a linear map. If we write T as a matrix in this basis, then the matrix of T† (in the same basis) is the conjugate transpose of the matrix of T; that is, the (i,j)th entry of the matrix of T† is the complex conjugate of the (j,i)th entry of the matrix of T.

Proof. One-line version: take v and w to be basis elements, and this falls right out.

Full proof: let

T = [ a11  ...  a1n ]
    [ ⋮    ⋱    ⋮   ]
    [ an1  ...  ann ]

in this basis e1, …, en. Then, letting w = ei and v = ej we deduce that

⟨ei, T†(ej)⟩ = ⟨T(ei), ej⟩ = aji   =⇒   ⟨T†(ej), ei⟩ = āji

for any i, which is enough to deduce the result. □

15.4  Eigenvalues of normal maps

We now come to the advertised theorem. Restrict to the situation where T : V → V. You see, the world would be a very beautiful place if it turned out that we could pick a basis of eigenvectors that was also orthonormal. This is of course far too much to hope for; even without the orthonormal condition, we saw that Jordan form could still have 1’s off the diagonal.

However, it turns out that there is a complete characterization of exactly when our overzealous dream is true.

Definition 15.4.1. We say a linear map T (from a finite-dimensional inner product space to itself) is normal if T†T = TT†.

We say a complex T is self-adjoint or Hermitian if T† = T; i.e. as a matrix in any orthonormal basis, T is its own conjugate transpose. For real T we say “self-adjoint”, “Hermitian” or symmetric.

Theorem 15.4.2 (Normal ⟺ diagonalizable with orthonormal basis)
Let V be a finite-dimensional complex inner product space. A linear map T : V → V is normal if and only if one can pick an orthonormal basis of eigenvectors.

Exercise 15.4.3. Show that if there exists such an orthonormal basis then T : V → V is normal, by writing T as a diagonal matrix in that basis.

Proof. This is long, and maybe should be omitted on a first reading. If T has an orthonormal basis of eigenvectors, this result is immediate.

Now assume T is normal. We first prove T is diagonalizable; this is the hard part.

Claim 15.4.4. If T is normal, then ker T = ker T^r = ker T† for r ≥ 1. (Here T^r is T applied r times.)

Proof of Claim. Let S = T†T, which is self-adjoint. We first note that S is Hermitian and ker S = ker T. To see it’s Hermitian, note ⟨Sv, w⟩ = ⟨Tv, Tw⟩ = ⟨v, Sw⟩. Taking v = w also implies ker S ⊆ ker T (and hence equality, since obviously ker T ⊆ ker S).

First, since we have ⟨S^r(v), S^{r−2}(v)⟩ = ⟨S^{r−1}(v), S^{r−1}(v)⟩, an induction shows that ker S = ker S^r for r ≥ 1. Now, since T is normal, we have S^r = (T†)^r T^r, and thus we have the inclusions

ker T ⊆ ker T^r ⊆ ker S^r = ker S = ker T

where the last equality follows from the first claim. Thus in fact ker T = ker T^r.

Finally, to show equality with ker T† we compute

⟨Tv, Tv⟩ = ⟨v, T†Tv⟩ = ⟨v, TT†v⟩ = ⟨T†v, T†v⟩

so ∥Tv∥ = ∥T†v∥ for every v, and in particular ker T = ker T†. □

Now consider the given T, and any λ.

Question 15.4.5. Show that (T − λ id)† = T† − λ̄ id. Thus if T is normal, so is T − λ id.

In particular, for any eigenvalue λ of T, we find that ker(T − λ id) = ker(T − λ id)^r. This implies that all the Jordan blocks of T have size 1; i.e. that T is in fact diagonalizable. Moreover, ker(T − λ id) = ker((T − λ id)†) = ker(T† − λ̄ id), so we conclude that the eigenvectors of T and T† match, and the eigenvalues are complex conjugates.

So, diagonalize T. We just need to show that if v and w are eigenvectors of T with distinct eigenvalues, then they are orthogonal. (We can use Gram-Schmidt on any eigenvalue that appears multiple times.) To do this, suppose T(v) = λv and T(w) = μw (thus T†(w) = μ̄w). Then

λ⟨v,w⟩ = ⟨λv, w⟩ = ⟨Tv, w⟩ = ⟨v, T†(w)⟩ = ⟨v, μ̄w⟩ = μ⟨v,w⟩.

Since λ ≠ μ, we conclude ⟨v,w⟩ = 0. □

This means that not only can we write

T = [ λ1  0   ...  0  ]
    [ 0   λ2  ...  0  ]
    [ ⋮   ⋮   ⋱   ⋮  ]
    [ 0   0   ...  λn ]

but moreover that the basis associated with this matrix consists of orthonormal vectors.

As a corollary:

Theorem 15.4.6 (Hermitian matrices have real eigenvalues)
A Hermitian matrix T is diagonalizable, and all its eigenvalues are real.

Proof. Obviously Hermitian =⇒ normal, so write T in the orthonormal basis of eigenvectors. To see that the eigenvalues are real, note that T = T† means λi = λ̄i for every i. □
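This is exactly what numpy's eigh routine computes for a Hermitian input: real eigenvalues together with an orthonormal eigenbasis (a quick sketch of mine):

    import numpy as np

    A = np.array([[2.0, 1 - 1j],
                  [1 + 1j, 3.0]])
    assert np.allclose(A, A.conj().T)              # A is Hermitian

    eigvals, U = np.linalg.eigh(A)                 # routine specialized to Hermitian input
    assert np.allclose(U.conj().T @ U, np.eye(2))  # eigenvector columns are orthonormal
    assert np.allclose(A @ U, U @ np.diag(eigvals))
    # eigvals comes back as a real array, matching the theorem.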

15.5  A few harder problems to think about

Problem 15A (Double dual). Let V be a finite-dimensional vector space. Prove that

V → (V∨)∨
v ↦ (ξ ↦ ξ(v))

gives an isomorphism. (This is significant because the isomorphism is canonical, and in particular does not depend on the choice of basis. So this is more impressive.)

Problem 15B (Fundamental theorem of linear algebra). Let T : V → W be a map of finite-dimensional k-vector spaces. Prove that

dim im T = dim im T∨ = dim V − dim ker T = dim W − dim ker T∨.

Problem 15C (Row rank is column rank). An m × n matrix M of real numbers is given. The column rank of M is the dimension of the span in ℝm of its n column vectors. The row rank of M is the dimension of the span in ℝn of its m row vectors. Prove that the row rank and column rank are equal.

Problem 15D (The complex conjugate spaces). Let V = (V, +, ⋅) be a complex vector space. Define the complex conjugate vector space, denoted V̄ = (V, +, ∗), by changing just the scalar multiplication:

c ∗ v = c̄ ⋅ v.

Show that for a sesquilinear inner form ⟨−,−⟩ on V, if V is finite-dimensional, then

V̄ → V∨
v ↦ ⟨−, v⟩

is an isomorphism of complex vector spaces.

Problem 15E (T∨ vs T†). Let V and W be finite-dimensional real inner product spaces and let T : V → W be a linear map. Show that the following diagram commutes:

[Diagram: T† : W → V on top and T∨ : W∨ → V∨ on the bottom, with the vertical maps the isomorphisms W ∼= W∨ and V ∼= V∨.]

Here the isomorphisms are v ↦ ⟨−, v⟩. Thus, for real inner product spaces, T† is just T∨ with the duals eliminated (by Theorem 15.2.2).

Problem 15F (Polynomial criteria for normality). Let V be a complex inner product space and let T : V → V be a linear map. Show that T is normal if and only if there is a polynomial p ∈ ℂ[t] such that

T† = p(T).

Part V
More on Groups

16  Group actions overkill AIME problems

Consider this problem from the 1996 AIME:

(AIME 1996) Two of the squares of a 7 × 7 checkerboard are painted yellow, and the rest are painted green. Two color schemes are equivalent if one can be obtained from the other by applying a rotation in the plane of the board. How many inequivalent color schemes are possible?

What’s happening here? Let X be the set of the (49 choose 2) possible colorings of the board. What’s the natural interpretation of “rotation”? Answer: the group ℤ₄ = ⟨r ∣ r⁴ = 1⟩ somehow “acts” on this set X by sending one state x ∈ X to another state r ⋅ x, which is just x rotated by 90°. Intuitively we’re just saying that two configurations are the same if they can be reached from one another by this “action”.

We can make all of this precise using the idea of a group action.

16.1  Definition of a group action

Prototypical example for this section: The AIME problem.

Definition 16.1.1. Let X be a set and G a group. A group action is a binary operation ⋅ : G × X → X which lets a g ∈ G send an x ∈ X to g ⋅ x. It satisfies the axioms

  • (g ⋆ h) ⋅ x = g ⋅ (h ⋅ x) for any g, h ∈ G and x ∈ X, and
  • 1G ⋅ x = x for any x ∈ X.

Example 16.1.2 (Examples of group actions)
Let G = (G,⋆) be a group.

(a)
The group ℤ₄ can act on the set of ways to color a 7 × 7 board either yellow or green.
(b)
The group ℤ₄ = ⟨r ∣ r⁴ = 1⟩ acts on the xy-plane ℝ⊕2 as follows: r ⋅ (x, y) = (−y, x). In other words, r is rotation by 90° around the origin.
(c)
The dihedral group D₂ₙ acts on the set of ways to color the vertices of an n-gon.
(d)
The group Sₙ acts on X = {1, 2, …, n} by applying the permutation σ: namely, σ ⋅ x := σ(x).
(e)
The group G can act on itself (i.e. X = G) by left multiplication: put g ⋅ g′ := g ⋆ g′.

16.2  Stabilizers and orbits

Prototypical example for this section: Again the AIME problem.

Given a group action of G on X, we can define an equivalence relation ∼ on X as follows: x ∼ y if x = g ⋅ y for some g ∈ G. For example, in the AIME problem, ∼ means “one can be obtained from the other by a rotation”.

Question 16.2.1. Why is this an equivalence relation?

In that case, the AIME problem wants the number of equivalence classes under ∼. So let’s give these equivalence classes a name: orbits. We usually denote orbits by 𝒪.

As usual, orbits carve out X into equivalence classes.

It turns out that a very closely related concept is:

Definition 16.2.2. The stabilizer of a point x X, denoted StabG(x), is the set of g G which fix x; in other words

StabG (x ) := {g ∈ G | g ⋅x = x} .

Example 16.2.3
Consider the AIME problem again, with X the set of possible states (again G = ℤ₄). Let x be the configuration where two opposite corners are colored yellow. Evidently 1G fixes x, but so does the 180° rotation r². But r and r³ do not preserve x, so StabG(x) = {1, r²} ≅ ℤ₂.

Question 16.2.4. Why is StabG(x) a subgroup of G?

Once we realize the stabilizer is a group, this leads us to what I privately call the “fundamental theorem of how big an orbit is”.

Theorem 16.2.5 (Orbit-stabilizer theorem)
Let 𝒪 be an orbit, and pick any x ∈ 𝒪. Let S = StabG(x), which is a subgroup of G. There is a natural bijection between 𝒪 and the left cosets of S. In particular,

|𝒪 ||S | = |G|.

In particular, the stabilizers of each x ∈𝒪 have the same size.

Proof. The point is that every coset gS just specifies an element of 𝒪, namely g ⋅ x. The fact that S is a stabilizer implies that it is irrelevant which representative of the coset we pick.

Since the |𝒪 | cosets partition G, each of size |S|, we obtain the second result. □
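Here is a small computational sanity check of the orbit-stabilizer theorem on the AIME configuration above (a sketch of mine, not part of the text): the coloring with two opposite corners yellow has an orbit of size 2 and a stabilizer of size 2, and 2 ⋅ 2 = |ℤ₄| = 4.

    def rotate(coloring):
        """Rotate a set of yellow cells of a 7x7 board by 90 degrees."""
        return frozenset((j, 6 - i) for (i, j) in coloring)

    x = frozenset([(0, 0), (6, 6)])      # two opposite corners yellow

    # images of x under 1, r, r^2, r^3
    images = [x]
    for _ in range(3):
        images.append(rotate(images[-1]))

    orbit = set(images)
    stab_size = sum(1 for img in images if img == x)
    print(len(orbit), stab_size, len(orbit) * stab_size)   # 2 2 4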

16.3  Burnside’s lemma

Now for the crux of this chapter: a way to count the number of orbits.

Theorem 16.3.1 (Burnside’s lemma)
Let G act on a set X. The number of orbits of the action is equal to

(1/|G|) ∑_{g∈G} |FixPt_g|

where FixPt_g is the set of points x ∈ X such that g ⋅ x = x.

The proof is deferred as a bonus problem, since it has a very olympiad-flavored solution. As usual, this lemma was not actually proven by Burnside; Cauchy got there first, and thus it is sometimes called the lemma that is not Burnside’s. Example application:

Example 16.3.2 (AIME 1996)
Two of the squares of a 7 ×7 checkerboard are painted yellow, and the rest are painted green. Two color schemes are equivalent if one can be obtained from the other by applying a rotation in the plane of the board. How many inequivalent color schemes are possible?

We know that G = ℤ₄ acts on the set X of (49 choose 2) possible coloring schemes. Now we can compute FixPt_g explicitly for each g ∈ ℤ₄.

  • If g = 1, then every coloring is fixed, so |FixPt_g| = (49 choose 2) = 1176.
  • If g = r or g = r³, there are no fixed colorings: the center is the only square fixed by g, and every other square lies in an orbit of size 4, so no 2-element set of squares can be preserved.
  • If g = r², the two yellow squares must form an orbit {a, r²(a)} of the 180° rotation, where a is not the center; this gives 48/2 = 24 fixed colorings.

As |G| = 4, the average is

(1176 + 24 + 0 + 0) / 4 = 300.

Exercise 16.3.3 (MathCounts Chapter Target Round). A circular spinner has seven sections of equal size, each of which is colored either red or blue. Two colorings are considered the same if one can be rotated to yield the other. In how many ways can the spinner be colored? (Answer: 20)
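As a sanity check on the stated answer: Burnside gives (2⁷ + 6 ⋅ 2)/7 = 20 here, since the identity rotation fixes all 2⁷ = 128 colorings, while each of the six nontrivial rotations fixes only the two monochromatic ones (7 is prime, so every nontrivial rotation is a single 7-cycle of the sections).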

Consult [?] for some more examples of “hands-on” applications.

16.4  Conjugation of elements

Prototypical example for this section: In Sn, conjugacy classes are “cycle types”.

A particularly common type of action is the so-called conjugation. We let G act on itself as follows:

g : h ↦ ghg⁻¹.

You might think this definition is a little artificial. Who cares about the element ghg⁻¹? Let me try to convince you this definition is not so unnatural.

Example 16.4.1 (Conjugacy in Sn)
Let G = S₅, and fix a π ∈ S₅. Here’s the question: is πσπ⁻¹ related to σ? To illustrate this, I’ll write out a completely random example of a permutation σ ∈ S₅.

        1 ↦ 3                          π(1) ↦ π(3)
        2 ↦ 1                          π(2) ↦ π(1)
If σ =  3 ↦ 5      then    πσπ⁻¹ =     π(3) ↦ π(5)
        4 ↦ 2                          π(4) ↦ π(2)
        5 ↦ 4                          π(5) ↦ π(4)

Thus our fixed π doesn’t really change the structure of σ at all: it just “renames” each of the elements 1, 2, 3, 4, 5 to π(1), π(2), π(3), π(4), π(5).

But wait, you say. That’s just a very particular type of group behaving nicely under conjugation. Why does this mean anything more generally? All I have to say is: remember Cayley’s theorem! (This was ?? .)

In any case, we may now define:

Definition 16.4.2. The conjugacy classes of a group G are the orbits of G under the conjugacy action.

Let’s see what the conjugacy classes of Sn are, for example.

Example 16.4.3 (Conjugacy classes of Sn correspond to cycle types)
Intuitively, the discussion above says that two elements of Sn should be conjugate if they have the same “shape”, regardless of what the elements are named. The right way to make the notion of “shape” rigorous is cycle notation. For example, consider the permutation

σ₁ = (1 3 5)(2 4)

in cycle notation, meaning 1 ↦ 3 ↦ 5 ↦ 1 and 2 ↦ 4 ↦ 2. It is conjugate to the permutation

σ2 = (1 2 3)(4 5)

or any other way of relabeling the elements. So, we could think of σ₁ as having conjugacy class

(− − −)(− −).

More generally, you can show that two elements of Sn are conjugate if and only if they have the same “shape” under cycle decomposition.

Question 16.4.4. Show that the number of conjugacy classes of Sn equals the number of partitions of n.
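One can verify this correspondence computationally for small n; the following Python sketch (mine, not from the text) collects the cycle types of all permutations of {0, …, n−1} and counts them:

    from itertools import permutations

    def cycle_type(perm):
        """Sorted cycle lengths of a permutation in one-line notation."""
        seen, lengths = set(), []
        for s in range(len(perm)):
            if s in seen:
                continue
            length, cur = 0, s
            while cur not in seen:
                seen.add(cur)
                cur = perm[cur]
                length += 1
            lengths.append(length)
        return tuple(sorted(lengths))

    n = 5
    types = {cycle_type(p) for p in permutations(range(n))}
    print(len(types))   # 7, the number of partitions of 5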

As long as I’ve put the above picture, I may as well also define:

Definition 16.4.5. Let G be a group. The center of G, denoted Z(G), is the set of elements x G such that xg = gx for every g G. More succinctly,

Z (G) := {x ∈ G | gx = xg ∀g ∈ G }.

You can check this is indeed a subgroup of G.

Question 16.4.6. Why is Z(G) normal in G?

Question 16.4.7. What are the conjugacy classes of elements in the center?

A trivial result that gets used enough that I should explicitly call it out:

Corollary 16.4.8 (Conjugacy in abelian groups is trivial)
If G is abelian, then the conjugacy classes all have size one.

16.5  A few harder problems to think about

Problem 16A (PUMaC 2009 C8). Taotao wants to buy a bracelet consisting of seven beads, each of which is orange, white or black. (The bracelet can be rotated and reflected in space.) Find the number of possible bracelets.

Problem 16B. Show that two elements in the same conjugacy class have the same order.

Problem 16C. Prove Burnside’s lemma.

Problem 16D (The “class equation”). Let G be a finite group. We define the centralizer CG(g) = {x ∈ G ∣ xg = gx} for each g ∈ G. Show that

|G| = |Z(G)| + ∑_{s∈S} |G| / |CG(s)|

where S ⊆ G is defined as follows: for each conjugacy class C ⊆ G with |C| > 1, we pick a representative of C and add it to S.

Problem 16E (Classical). Assume G is a finite group and p is the smallest prime dividing its order. Let H be a subgroup of G with |G| / |H| = p. Show that H is normal in G.

17  Find all groups

The following problem will hopefully never be proposed at the IMO.

Let n be a positive integer and let S = {1, …, n}. Find all functions f : S × S → S such that

(a)
f(x,1) = f(1,x) = x for all x S.
(b)
f(f(x,y),z) = f(x,f(y,z)) for all x,y,z S.
(c)
For every x S there exists a y S such that f(x,y) = f(y,x) = 1.

Nonetheless, it’s remarkable how much progress we’ve made on this “problem”. In this chapter I’ll try to talk about some things we have accomplished.

17.1  Sylow theorems

Here we present the famous Sylow theorems, some of the most general results we have about finite groups.

Theorem 17.1.1 (The Sylow theorems)
Let G be a group of order pⁿm, where gcd(p, m) = 1 and p is a prime. A Sylow p-subgroup is a subgroup of order pⁿ. Let n_p be the number of Sylow p-subgroups of G. Then

(a)
n_p ≡ 1 (mod p). In particular, n_p ≠ 0 and a Sylow p-subgroup exists.
(b)
np divides m.
(c)
Any two Sylow p-subgroups are conjugate subgroups (hence isomorphic).

Sylow’s theorem is really huge for classifying groups; in particular, the conditions n_p ≡ 1 (mod p) and n_p ∣ m can often pin down the value of n_p to just a few values. Note in particular that if n_p = 1, then the unique Sylow p-subgroup is normal in G, since conjugation permutes the Sylow p-subgroups.

Here’s an example of another “practical” application.

Proposition 17.1.2 (Triple product of primes)
If |G| = pqr is the product of distinct primes, then G must have a normal Sylow subgroup.

Proof. WLOG, assume p < q < r. Notice that n_p ≡ 1 (mod p), n_p ∣ qr, and cyclically; assume for contradiction that n_p, n_q, n_r > 1.

Since n_r ∣ pq, and n_r divides neither p nor q (as n_r ≥ 1 + r > p, q), we must have n_r = pq. Also, n_p ≥ 1 + p and n_q ≥ 1 + q. So we must have at least 1 + p Sylow p-subgroups, at least 1 + q Sylow q-subgroups, and at least pq Sylow r-subgroups.

But these groups are pretty exclusive.

Question 17.1.3. Take the n_p + n_q + n_r Sylow subgroups and consider two of them, say H₁ and H₂. Show that |H₁ ∩ H₂| = 1 as follows: check that H₁ ∩ H₂ is a subgroup of both H₁ and H₂, and then use Lagrange’s theorem.

We claim that there are too many elements now. Indeed, if we count the non-identity elements contributed by these subgroups, we get

n_p(p − 1) + n_q(q − 1) + n_r(r − 1) ≥ (1 + p)(p − 1) + (1 + q)(q − 1) + pq(r − 1) > pqr

(the middle expression equals pqr + p² + q² − pq − 2, and p² + q² − pq − 2 > 0 for 2 ≤ p < q), which is more elements than G has! □

17.2  (Optional) Proving Sylow’s theorem

The proof of Sylow’s theorem is somewhat involved, and in fact many proofs exist. I’ll present one below here. It makes extensive use of group actions, so I want to recall a few facts first. If G acts on X, then

  • the orbits of the action partition X;
  • if 𝒪 is the orbit of a point x, then |𝒪| = |G| / |StabG(x)|, which divides |G|;
  • in particular, if |G| is a power of p, then every orbit has size a power of p (possibly 1).

Note that when I say x is a fixed point, I mean it is fixed by every element of the group, i.e. the orbit really has size one. Hence that’s a really strong condition.

17.2.i  Definitions

Prototypical example for this section: Conjugacy in Sn.

I’ve defined conjugacy of elements previously, but I now need to define it for groups:

Definition 17.2.1. Let G be a group, and let X denote the set of subgroups of G. Then conjugation is the action of G on X that sends

H  ↦→  gHg −1 = {ghg−1 | h ∈ H }.

If H and K are subgroups of G such that H = gKg⁻¹ for some g ∈ G (in other words, they are in the same orbit under this action), then we say they are conjugate subgroups.

Because we somehow don’t think of conjugate elements as “that different” (for example, in permutation groups), the following shouldn’t be surprising:

Question 17.2.2. Show that for any subgroup H of a group G, the map H → gHg⁻¹ by h ↦ ghg⁻¹ is in fact an isomorphism. This implies that any two conjugate subgroups are isomorphic.

Definition 17.2.3. For any subgroup H of G the normalizer of H is defined as

NG(H) := {g ∈ G ∣ gHg⁻¹ = H}.

In other words, it is the stabilizer of H under the conjugation action.

We are now ready to present the proof.

17.2.ii  Step 1: Prove that a Sylow p-subgroup exists

What follows is something like the probabilistic method. By considering the set X of ALL subsets of size pⁿ at once, we can exploit the “deep number theoretic fact” that

|X| = (pⁿm choose pⁿ) ≢ 0 (mod p).

(It’s not actually deep: use Lucas’ theorem.)

Here is the proof. Let G act on X by left multiplication, g ⋅ S := gS = {gs ∣ s ∈ S}. Since p ∤ |X|, some orbit 𝒪 has size not divisible by p. Pick S ∈ 𝒪 and let H = StabG(S). On one hand, |H| = |G| / |𝒪| by the orbit-stabilizer theorem, and since pⁿ ∣ |G| while p ∤ |𝒪|, we get pⁿ ∣ |H|. On the other hand, H acts on the set S by left multiplication, and this action is free (hs = s forces h = 1), so every orbit of it has size exactly |H|; hence |H| divides |S| = pⁿ. Therefore |H| = pⁿ, and H is the desired Sylow p-subgroup.

17.2.iii  Step 2: Any two Sylow p-subgroups are conjugate

Let P be a Sylow p-subgroup (which exists by the previous step). We now prove that for any p-subgroup Q of G, we have Q ⊆ gPg⁻¹ for some g ∈ G. Note that if Q is also a Sylow p-subgroup, then Q = gPg⁻¹ for size reasons; this implies that any two Sylow subgroups are indeed conjugate.

Let Q act on the set of left cosets of P by left multiplication. Note that

  • Q is a p-group, so every orbit of this action has size a power of p (possibly 1);
  • there are |G| / |P| = m left cosets, and p ∤ m, so the orbit sizes cannot all be divisible by p.

Hence some coset gP is a fixed point for every q, meaning qgP = gP for all q. Equivalently, qg ∈ gP for all q ∈ Q, so Q ⊆ gPg⁻¹ as desired.

17.2.iv  Step 3: Showing n_p ≡ 1 (mod p)

Let 𝒮 denote the set of all the Sylow p-subgroups. Let P ∈𝒮 be arbitrary.

Question 17.2.4. Why does |𝒮| equal np? (In other words, are you awake?)

Now we can proceed with the proof. Let P act on 𝒮 by conjugation. Then:

  • Since P is a p-group, every orbit has size a power of p (possibly 1).
  • {P} itself is an orbit of size 1.
  • Conversely, suppose {Q} is an orbit of size 1, meaning P ⊆ NG(Q). Then P and Q are both Sylow p-subgroups of NG(Q), so by Step 2 they are conjugate inside NG(Q); but Q is normal in NG(Q), so Q = P.

Hence exactly one orbit has size 1 and every other orbit has size divisible by p, which gives |𝒮| ≡ 1 (mod p).

17.2.v  Step 4: n_p divides m

Since n_p ≡ 1 (mod p), it suffices to show n_p divides |G|. Let G act on the set of all Sylow p-subgroups by conjugation. Step 2 says this action has only one orbit, so the orbit-stabilizer theorem implies n_p divides |G|.

17.3  (Optional) Simple groups and Jordan-Hölder

Prototypical example for this section: the decomposition of ℤ/12ℤ is {1} ⊴ ℤ/2ℤ ⊴ ℤ/4ℤ ⊴ ℤ/12ℤ.

Just like every integer breaks down as the product of primes, we can try to break every group down as a product of “basic” groups. Armed with our idea of quotient groups, the right notion is this.

Definition 17.3.1. A simple group is a group with no normal subgroups other than itself and the trivial group.

Question 17.3.2. For which n is ℤ/nℤ simple? (Hint: remember that ℤ/nℤ is abelian.)

Then we can try to define what it means to “break down a group”.

Definition 17.3.3. A composition series of a group G is a sequence of subgroups H0, H1, …, Hn such that

{1} = H₀ ⊴ H₁ ⊴ H₂ ⊴ ⋯ ⊴ Hₙ = G

of maximal length (i.e. n is as large as possible, but all Hᵢ are of course distinct). The composition factors are the groups H₁/H₀, H₂/H₁, …, Hₙ/Hₙ₋₁.

You can show that the “maximality” condition implies that the composition factors are all simple groups.

Let’s say two composition series are equivalent if they have the same composition factors (up to permutation); in particular they have the same length. Then it turns out that the following theorem is true.

Theorem 17.3.4 (Jordan-Hölder)
Every finite group G admits a unique composition series up to equivalence.

Example 17.3.5 (Fundamental theorem of arithmetic when n = 12)
Let’s consider the group ℤ/12ℤ. It’s not hard to check that the possible composition series are

{1} ⊴ ℤ/2ℤ ⊴ ℤ/4ℤ ⊴ ℤ/12ℤ with factors ℤ/2ℤ, ℤ/2ℤ, ℤ/3ℤ
{1} ⊴ ℤ/2ℤ ⊴ ℤ/6ℤ ⊴ ℤ/12ℤ with factors ℤ/2ℤ, ℤ/3ℤ, ℤ/2ℤ
{1} ⊴ ℤ/3ℤ ⊴ ℤ/6ℤ ⊴ ℤ/12ℤ with factors ℤ/3ℤ, ℤ/2ℤ, ℤ/2ℤ.

These correspond to the factorization 12 = 2² ⋅ 3.

This suggests that classifying all finite simple groups would be great progress, since every finite group is somehow a “product” of simple groups; the only issue is that there are multiple ways of building a group from constituents.

Amazingly, we actually have a full list of simple groups, but the list is really bizarre. Every finite simple group falls in one of the following categories:

  • the cyclic groups ℤ/pℤ for p prime,
  • the alternating groups Aₙ for n ≥ 5,
  • the simple groups of Lie type (several infinite families of matrix-like groups), and
  • twenty-six “sporadic” groups that fit into none of the above families.

The two largest of the sporadic groups have cute names. The baby monster group has order

2⁴¹ ⋅ 3¹³ ⋅ 5⁶ ⋅ 7² ⋅ 11 ⋅ 13 ⋅ 17 ⋅ 19 ⋅ 23 ⋅ 31 ⋅ 47 ≈ 4 ⋅ 10³³

and the monster group (also “friendly giant”) has order

2⁴⁶ ⋅ 3²⁰ ⋅ 5⁹ ⋅ 7⁶ ⋅ 11² ⋅ 13³ ⋅ 17 ⋅ 19 ⋅ 23 ⋅ 29 ⋅ 31 ⋅ 41 ⋅ 47 ⋅ 59 ⋅ 71 ≈ 8 ⋅ 10⁵³.

It contains twenty of the sporadic groups as subquotients (including itself), and these twenty groups are called the “happy family”.

Math is weird.

Question 17.3.6. Show that “finite simple group of order 2” is redundant in the sense that any group of order 2 is both finite and simple.

17.4  A few harder problems to think about

Problem 17A (Cauchy’s theorem). Let G be a group and let p be a prime dividing |G|. Prove that G has an element of order p.

Problem 17B. Let G be a finite simple group. Show that |G| ≠ 56.

Problem 17C (Engel’s PSS?). Consider the set of all words consisting of the letters a and b. Given such a word, we can change the word either by inserting a word of the form www, where w is a word, anywhere in the given word, or by deleting such a sequence from the word. Can we turn the word ab into the word ba?

Problem 17D. Let p be a prime. Show that the only simple group with order pⁿ (for some positive integer n) is the cyclic group ℤ/pℤ.

18  The PID structure theorem

The main point of this chapter is to discuss a classification theorem for finitely generated abelian groups. This won’t take long to do, and if you like, you can read just the first section and then move on.

However, since I’m here, I will go ahead and state the result as a special case of the much more general structure theorem for modules over a PID. Its corollaries include the classification of finitely generated abelian groups as well as the Jordan and Frobenius normal forms (see the problems at the end of this chapter).

18.1  Finitely generated abelian groups

Remark 18.1.1 — We talk about abelian groups in what follows, but really the morally correct way to think about these structures is as ℤ-modules.

Definition 18.1.2. An abelian group G = (G, +) is finitely generated if it is finitely generated as a ℤ-module. (That is, there exists a finite collection b₁, …, bₘ ∈ G such that every x ∈ G can be written in the form c₁b₁ + ⋯ + cₘbₘ for some c₁, …, cₘ ∈ ℤ.)

Example 18.1.3 (Examples of finitely generated abelian groups)

(a)
ℤ is finitely generated (by 1).
(b)
ℤ/nℤ is finitely generated (by 1).
(c)
ℤ⊕2 is finitely generated (by the two elements (1, 0) and (0, 1)).
(d)
ℤ⊕3 ⊕ ℤ/9ℤ ⊕ ℤ/2016ℤ is finitely generated, by five elements.
(e)
ℤ/3ℤ ⊕ ℤ/5ℤ is finitely generated, by two elements.

Exercise 18.1.4. In fact ℤ/3ℤ ⊕ ℤ/5ℤ is generated by one element. What is it?

You might notice that these examples are not very diverse. That’s because they are actually the only examples:

Theorem 18.1.5 (Fundamental theorem of finitely generated abelian groups)
Let G be a finitely generated abelian group. Then there exists an integer r ≥ 0 and prime powers q₁, …, qₘ (not necessarily distinct) such that

G  ∼= ℤ⊕r ⊕ ℤ∕q1ℤ ⊕ ℤ∕q2ℤ ⊕ ⋅⋅⋅⊕ ℤ ∕qmℤ.

This decomposition is unique up to permutation of the ℤ/qᵢℤ.

Definition 18.1.6. The rank of a finitely generated abelian group G is the integer r above.
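For example, ℤ⊕2 ⊕ ℤ/12ℤ ≅ ℤ⊕2 ⊕ ℤ/4ℤ ⊕ ℤ/3ℤ, so this group has rank r = 2, with prime powers q₁ = 4 and q₂ = 3.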

Now, we could prove this theorem, but it is more interesting to go for the gold and state and prove the entire structure theorem.

18.2  Some ring theory prerequisites

Prototypical example for this section: R = ℤ.

Before I can state the main theorem, I need to define a few terms for UFD’s, which behave much like ℤ. Our intuition from the case R = ℤ basically carries over verbatim.

We don’t even need to deal with prime ideals and can factor elements instead.

Definition 18.2.1. If R is a UFD, then p ∈ R is a prime element if (p) is a prime ideal and p ≠ 0. For UFD’s this is equivalent to: if p = xy then either x or y is a unit.

So for example in ℤ the set of prime elements is {±2, ±3, ±5, …}. Now, since R is a UFD, every element r factors, up to a unit u, into a product of prime elements

r = u p₁^e₁ p₂^e₂ ⋯ pₘ^eₘ.

Definition 18.2.2. We say r divides s if s = rr′ for some r′ ∈ R. This is written r ∣ s.

Example 18.2.3 (Divisibility in ℤ)
The number 0 is divisible by every element of ℤ. All other divisibility is as expected.

Question 18.2.4. Show that r ∣ s if and only if the exponent of each prime in r is less than or equal to the corresponding exponent in s.

Now, the case of interest is the even stronger case when R is a PID:

Proposition 18.2.5 (PID’s are Noetherian UFD’s)
If R is a PID, then it is Noetherian and also a UFD.

Proof. The fact that R is Noetherian is obvious, since every ideal is principal and in particular finitely generated. For R to be a UFD we essentially repeat the proof for ℤ, using the fact that the ideal (a, b) is principal in order to extract gcd(a, b). □

In this case, we have a Chinese remainder theorem for elements.

Theorem 18.2.6 (Chinese remainder theorem for rings)
Let m and n be relatively prime elements, meaning (m) + (n) = (1). Then

R ∕(mn ) ∼= R ∕(m )× R ∕(n ).

Here the ring product is as defined in ?? .

Proof. This is the same as the proof of the usual Chinese remainder theorem. First, since (m,n) = (1) we have am + bn = 1 for some a and b. Then we have a map

R/(m) × R/(n) → R/(mn)  by  (r, s) ↦ r ⋅ bn + s ⋅ am.

One can check that this map is well-defined and an isomorphism of rings. (Diligent readers invited to do so.) □
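For R = ℤ this is the classical statement, and the map from the proof can be checked mechanically; here is a quick Python verification of mine for m = 3, n = 5, taking a = 2, b = −1 so that am + bn = 1:

    m, n = 3, 5
    a, b = 2, -1
    assert a * m + b * n == 1

    # the map Z/3 x Z/5 -> Z/15 from the proof: (r, s) -> r*bn + s*am
    table = {(r, s): (r * b * n + s * a * m) % (m * n)
             for r in range(m) for s in range(n)}

    assert len(set(table.values())) == m * n         # bijective
    for (r, s), x in table.items():                  # inverts reduction mod m, n
        assert x % m == r and x % n == s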

Finally, we need to introduce the concept of a Noetherian R-module.

Definition 18.2.7. An R-module M is Noetherian if it satisfies one of the two equivalent conditions:

  • every ascending chain M₁ ⊆ M₂ ⊆ M₃ ⊆ ⋯ of submodules of M eventually stabilizes, or
  • every submodule of M (including M itself) is finitely generated.

This generalizes the notion of a Noetherian ring: a Noetherian ring R is one for which R is Noetherian as an R-module.

Question 18.2.8. Check these two conditions are equivalent. (Copy the proof for rings.)

18.3  The structure theorem

Our structure theorem takes two forms:

Theorem 18.3.1 (Structure theorem, invariant form)
Let R be a PID and let M be any finitely generated R-module. Then

M ≅ ⊕_{i=1}^{m} R/(sᵢ)

for some sᵢ (possibly zero) satisfying s₁ ∣ s₂ ∣ ⋯ ∣ sₘ.

Corollary 18.3.2 (Structure theorem, primary form)
Let R be a PID and let M be any finitely generated R-module. Then

M ∼=  R⊕r ⊕ R∕(q1)⊕ R ∕(q2)⊕ ⋅⋅⋅⊕ R ∕(qm )

where qᵢ = pᵢ^eᵢ for some prime element pᵢ and integer eᵢ ≥ 1.

Proof of corollary. Factor each si into prime factors (since R is a UFD), then use the Chinese remainder theorem. □

Remark 18.3.3 — In both theorems the decomposition is unique up to permutations of the summands; good to know, but I won’t prove this.

18.4  Reduction to maps of free R-modules

Definition 18.4.1. A free R-module is a module of the form R⊕n (or more generally, ⊕_{i∈I} R for some indexing set I, just to allow an infinite basis).

The proof of the structure theorem proceeds in two main steps. First, we reduce the problem to a linear algebra problem involving free R-modules Rd. Once that’s done, we just have to play with matrices; this is done in the next section.

Suppose M is finitely generated by d elements. Then there is a surjective map of R-modules

R⊕d ↠ M

sending the basis elements of R⊕d to the generators of M. Let K denote the kernel.

We claim that K is finitely generated as well. To this end we prove that

Lemma 18.4.2 (Direct sum of Noetherian modules is Noetherian)
Let M and N be two Noetherian R-modules. Then the direct sum M N is also a Noetherian R-module.

Proof. It suffices to show that if L ⊆ M ⊕ N, then L is finitely generated. One guess is that L = P ⊕ Q, where P and Q are the projections of L onto M and N. Unfortunately this is false (take M = N = ℤ and L = {(n, n) ∣ n ∈ ℤ}), so we will have to be more careful.

Consider the submodules

A = {x ∈ M ∣ (x, 0) ∈ L} ⊆ M
B = {y ∈ N ∣ ∃x ∈ M : (x, y) ∈ L} ⊆ N.

(Note the asymmetry for A and B: the proof doesn’t work otherwise.) Then A is finitely generated by a₁, …, aₖ, and B is finitely generated by b₁, …, bₗ. Let xᵢ = (aᵢ, 0) and let yᵢ = (∗, bᵢ) be elements of L (where the ∗’s are arbitrary things we don’t care about). Then the xᵢ and yᵢ together generate L: given (x, y) ∈ L, subtract a suitable combination of the yᵢ to reduce to an element of the form (x′, 0) ∈ L, and then x′ ∈ A is a combination of the aᵢ. □

Question 18.4.3. Deduce that for R a PID, R⊕d is Noetherian.

Hence K ⊆ R⊕d is finitely generated as claimed. So we can find another surjective map R⊕f ↠ K. Consequently, we have a composition

R⊕f −T→ R⊕d ↠ M

where T : R⊕f → R⊕d is the composition R⊕f ↠ K ⊆ R⊕d.
Observe that M is the cokernel of the linear map T, i.e. we have that

M ≅ R⊕d / im(T).

So it suffices to understand the map T well.

18.5  Smith normal form

The idea is now that we have reduced our problem to studying linear maps T : R⊕m → R⊕n, which can be thought of as a generic matrix

    ⎡ a₁₁  ⋯  a₁ₘ ⎤
T = ⎢  ⋮   ⋱   ⋮  ⎥
    ⎣ aₙ₁  ⋯  aₙₘ ⎦

for a basis e₁, …, eₘ of R⊕m and f₁, …, fₙ of R⊕n.

Of course, as you might expect, it ought to be possible to change the given bases such that T has a nicer matrix form. We already saw this in Jordan form, where we had a map T : V → V and changed the basis so that T was “almost diagonal”. This time, we have two bases we can change independently, so we would hope to get a diagonal matrix, or even better.

Before proceeding let’s think about how we might edit the matrix: what operations are permitted? Here are some examples:

  • swapping two rows or two columns;
  • adding a multiple of one row (or column) to another row (or column);
  • multiplying a row or column by a unit of R.

More generally,

If A is an invertible n × n matrix we can replace T with AT.

This corresponds to replacing

(f₁, …, fₙ) ↦ (A(f₁), …, A(fₙ))

(the “invertible” condition just guarantees the latter is a basis). Of course similarly we can replace T with TB where B is an invertible m × m matrix; this corresponds to

(e₁, …, eₘ) ↦ (B⁻¹(e₁), …, B⁻¹(eₘ)).

Armed with this knowledge, we can now approach:

Theorem 18.5.1 (Smith normal form)
Let R be a PID. Let M = R⊕m and N = R⊕n be free R-modules and let T : M → N be a linear map. Set k = min{m, n}.

Then we can select a pair of new bases for M and N such that the matrix of T has only diagonal entries s₁, s₂, …, sₖ, with s₁ ∣ s₂ ∣ ⋯ ∣ sₖ.

So if m > n, the matrix should take the form

⎡ s₁  0   ⋯  0   0  ⋯  0 ⎤
⎢ 0   s₂  ⋯  0   0  ⋯  0 ⎥
⎢ ⋮   ⋮   ⋱  ⋮   ⋮     ⋮ ⎥
⎣ 0   0   ⋯  sₙ  0  ⋯  0 ⎦

and similarly when m ≤ n.

Question 18.5.2. Show that Smith normal form implies the structure theorem.

Remark 18.5.3 — Note that this is not a generalization of Jordan form.

Example 18.5.4 (Example of Smith normal form)
To give a flavor of the idea of the proof, let’s work through a concrete example with the ℤ-matrix

⎡ 18  38  42 ⎤
⎣ 14  30  32 ⎦.

The GCD of all the entries is 2, and so motivated by this, we perform the Euclidean algorithm on the left column: subtract the second row from the first row, then three times the first row from the second:

⎡ 18  38  42 ⎤    ⎡ 4   8   10 ⎤    ⎡ 4  8  10 ⎤
⎣ 14  30  32 ⎦ ↦  ⎣ 14  30  32 ⎦ ↦  ⎣ 2  6   2 ⎦.

Now that the GCD of 2 is present, we move it to the upper-left by switching the two rows, and then kill off all the entries in the same row/column; since 2 was the GCD all along, we isolate 2 completely:

⎡ 4  8  10 ⎤    ⎡ 2  6   2 ⎤    ⎡ 2   6  2 ⎤    ⎡ 2   0  0 ⎤
⎣ 2  6   2 ⎦ ↦  ⎣ 4  8  10 ⎦ ↦  ⎣ 0  −4  6 ⎦ ↦  ⎣ 0  −4  6 ⎦.

This reduces the problem to a 1 × 2 matrix. So we just apply the Euclidean algorithm again there:

⎡ 2   0  0 ⎤    ⎡ 2   0  0 ⎤    ⎡ 2  0  0 ⎤    ⎡ 2  0  0 ⎤
⎣ 0  −4  6 ⎦ ↦  ⎣ 0  −4  2 ⎦ ↦  ⎣ 0  0  2 ⎦ ↦  ⎣ 0  2  0 ⎦.

Now all we have to do is generalize this proof to work with any PID. It’s intuitively clear how to do this: the PID condition more or less lets you perform a Euclidean algorithm.
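To make the Euclidean-algorithm idea concrete, here is a minimal Python sketch of mine (integers only, and omitting the final fix-up pass that enforces s₁ ∣ s₂ ∣ ⋯ in full generality) that diagonalizes a matrix exactly as in the example above:

    def diagonalize(A):
        """Diagonalize an integer matrix with row/column operations,
        mirroring the Euclidean-algorithm proof of Smith normal form."""
        A = [row[:] for row in A]
        n, m = len(A), len(A[0])
        for t in range(min(n, m)):
            while True:
                # move a nonzero entry of smallest absolute value to (t, t)
                entries = [(abs(A[i][j]), i, j) for i in range(t, n)
                           for j in range(t, m) if A[i][j] != 0]
                if not entries:
                    return A           # the remaining submatrix is zero
                _, i, j = min(entries)
                A[t], A[i] = A[i], A[t]
                for row in A:
                    row[t], row[j] = row[j], row[t]
                # one round of Euclid: reduce row t and column t mod A[t][t]
                done = True
                for i2 in range(t + 1, n):
                    q = A[i2][t] // A[t][t]
                    for j2 in range(t, m):
                        A[i2][j2] -= q * A[t][j2]
                    if A[i2][t] != 0:
                        done = False
                for j2 in range(t + 1, m):
                    q = A[t][j2] // A[t][t]
                    for i2 in range(t, n):
                        A[i2][j2] -= q * A[i2][t]
                    if A[t][j2] != 0:
                        done = False
                if done:
                    break
        return A

    print(diagonalize([[18, 38, 42], [14, 30, 32]]))  # [[2, 0, 0], [0, 2, 0]]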

Proof of Smith normal form. Begin with a generic matrix

    ⎡ a₁₁  ⋯  a₁ₘ ⎤
T = ⎢  ⋮   ⋱   ⋮  ⎥
    ⎣ aₙ₁  ⋯  aₙₘ ⎦

We want to show, by a series of operations (gradually changing the given basis) that we can rearrange the matrix into Smith normal form.

Define gcd(x,y) to be any generator of the principal ideal (x,y).

Claim 18.5.5 (“Euclidean algorithm”). If a and b are entries in the same row or column, we can change bases to replace a with gcd(a,b) and b with something else.

Proof. We do just the case of columns. By hypothesis, gcd(a,b) = xa + yb for some x,y R. We must have (x,y) = (1) now (we’re in a UFD). So there are u and v such that xu + yv = 1. Then

⎡ x   y ⎤ ⎡ a ⎤   ⎡ gcd(a, b) ⎤
⎣ −v  u ⎦ ⎣ b ⎦ = ⎣ something ⎦

and the first matrix is invertible (check this!), as desired.

Let s₁ = gcd of all the entries aᵢⱼ. Now by repeatedly applying this algorithm, we can cause s₁ to appear in the upper left hand corner. Then, we use it to kill off all the entries in the first row and the first column, thus arriving at a matrix

⎡ s₁   0     0    ⋯   0   ⎤
⎢ 0    a′₂₂  a′₂₃  ⋯  a′₂ₘ ⎥
⎢ 0    a′₃₂  a′₃₃  ⋯  a′₃ₘ ⎥
⎢ ⋮     ⋮     ⋮    ⋱   ⋮   ⎥
⎣ 0    a′ₙ₂  a′ₙ₃  ⋯  a′ₙₘ ⎦

Now we repeat the same procedure with this lower-right (n − 1) × (m − 1) matrix, and so on. This gives the Smith normal form. □

With the Smith normal form, we have in the original situation that

M ≅ R⊕d / im T

and applying the theorem to T completes the proof of the structure theorem.

18.6  A few harder problems to think about

Now, we can apply our structure theorem!

Problem 18A (Finite-dimensional vector spaces are all isomorphic). A vector space V over a field k has a finite spanning set of vectors. Show that V ∼=kn for some n.

Problem 18B (Frobenius normal form). Let T : V V where V is a finite-dimensional vector space over an arbitrary field k (not necessarily algebraically closed). Show that one can write T as a block-diagonal matrix whose blocks are all of the form

⎡ 0  0  0  ⋯  0  ∗ ⎤
⎢ 1  0  0  ⋯  0  ∗ ⎥
⎢ 0  1  0  ⋯  0  ∗ ⎥
⎢ ⋮  ⋮  ⋮  ⋱  ⋮  ⋮ ⎥
⎣ 0  0  0  ⋯  1  ∗ ⎦

(View V as a k[x]-module with action x ⋅ v = T(v).)

Problem 18C (Jordan normal form). Let T : V V where V is a finite-dimensional vector space over an arbitrary field k which is algebraically closed. Prove that T can be written in Jordan form.

Problem 18D. Find two abelian groups G and H which are not isomorphic, but for which there are injective homomorphisms G ↪ H and H ↪ G.

Solution. Take G = ℤ/3ℤ ⊕ ℤ/9ℤ ⊕ ℤ/9ℤ ⊕ ⋯ and H = ℤ/9ℤ ⊕ ℤ/9ℤ ⊕ ℤ/9ℤ ⊕ ⋯. Then there are injections G ↪ H and H ↪ G, but the groups are not isomorphic, since e.g. G has an element g of order 3 for which there’s no g′ ∈ G with g = 3g′. □

Part VI
Representation Theory

19  Representations of algebras

In the 19th century, the word “group” hadn’t been invented yet; all work was done with subsets of GL(n) or Sₙ. Only much later was the abstract definition of a group given: an abstract set G which was an object in its own right.

While this abstraction is good for some reasons, it is often also useful to work with concrete representations. This is the subject of representation theory. Linear algebra is easier than abstract algebra, so if we can take a group G and represent its elements concretely as matrices in GL(n), this makes the group easier to study. This is the representation theory of groups: how can we take a group and represent its elements as matrices?

19.1  Algebras

Prototypical example for this section: k[x₁, …, xₙ] and k[G].

Rather than working directly with groups from the beginning, it will be more convenient to deal with so-called k-algebras. This setting is more natural and general than that of groups, so once we develop the theory of algebras well enough, it will be fairly painless to specialize to the case of groups.

Colloquially,

An associative k-algebra is a possibly noncommutative ring with a copy of k inside it. It is thus a k-vector space.

I’ll present examples before the definition:

Example 19.1.1 (Examples of k-Algebras)
Let k be any field. The following are examples of k-algebras:

(a)
The field k itself.
(b)
The polynomial ring k[x₁, …, xₙ].
(c)
The set of n × n matrices with entries in k, which we denote by Matn(k). Note the multiplication here is not commutative.
(d)
The set Mat(V) of linear operators T : V → V, with multiplication given by the composition of operators. (Here V is some vector space over k.) This is really the same as the previous example.

Definition 19.1.2. Let k be a field. A k-algebra A is a possibly noncommutative ring, equipped with an injective ring homomorphism k ↪ A (whose image is the “copy of k”). In particular, 1k ↦ 1A.

Thus we can consider k as a subset of A, and we then additionally require λa = aλ for each λ k and a A.

If the multiplication operation is also commutative, then we say A is a commutative algebra.

Definition 19.1.3. Equivalently, a k-algebra A is a k-vector space which also has an associative, bilinear multiplication operation (with an identity 1A). The “copy of k” is obtained by considering elements λ1A for each λ k (i.e. scaling the identity by the elements of k, taking advantage of the vector space structure).

Abuse of Notation 19.1.4. Some other authors don’t require A to be associative or to have an identity, so to them what we have just defined is an “associative algebra with 1”. However, this is needlessly wordy for our purposes.

Example 19.1.5 (Group algebra)
The group algebra k[G] is the k-vector space whose basis elements are the elements of a group G, and where the product of two basis elements is the group multiplication. For example, suppose G = ℤ/2ℤ = {1G, x}. Then

k[G] = {a1G + bx ∣ a, b ∈ k}

with multiplication given by

(a1G + bx)(c1G + dx ) = (ac + bd)1G + (bc+ ad)x.
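Multiplication in k[G] for a cyclic group is just “convolution of coefficients”; here is a tiny Python sketch of mine for k[ℤ/nℤ], with elements stored as coefficient lists:

    def group_algebra_mult(p, q, n):
        """Multiply two elements of k[Z/nZ] given as coefficient lists of length n."""
        out = [0] * n
        for i, a in enumerate(p):
            for j, b in enumerate(q):
                out[(i + j) % n] += a * b
        return out

    # (1*1_G + 2x)(3*1_G + 4x) in k[Z/2Z]: matches (ac+bd)1_G + (bc+ad)x
    print(group_algebra_mult([1, 2], [3, 4], 2))   # [11, 10]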

Question 19.1.6. When is k[G] commutative?

The example k[G] is very important, because (as we will soon see) a representation of the algebra k[G] amounts to a representation of the group G itself.

It is worth mentioning at this point that:

Definition 19.1.7. A homomorphism of k-algebras A, B is a linear map T : A B which respects multiplication (i.e. T(xy) = T(x)T(y)) and which sends 1A to 1B. In other words, T is both a homomorphism as a ring and as a vector space.

Definition 19.1.8. Given k-algebras A and B, the direct sum A B is defined as pairs a + b, where addition is done in the obvious way, but we declare ab = 0 for any a A and b B.

Question 19.1.9. Show that 1A + 1B is the multiplicative identity of A B.

19.2  Representations

Prototypical example for this section: k[S3] acting on k3 is my favorite.

Definition 19.2.1. A representation of a k-algebra A (also a left A-module) is:

(i)
A k-vector space V , and
(ii)
An action ⋅ of A on V: thus, for every a ∈ A we can take v ∈ V and act on it to get a ⋅ v ∈ V. This satisfies the usual axioms:
  • (a + b) ⋅ v = a ⋅ v + b ⋅ v, a ⋅ (v + w) = a ⋅ v + a ⋅ w, and (ab) ⋅ v = a ⋅ (b ⋅ v).
  • λ ⋅ v = λv for λ ∈ k. In particular, 1A ⋅ v = v.

Definition 19.2.2. The action of A can be more succinctly described as saying that there is a k-algebra homomorphism ρ : A → Mat(V). (So a ⋅ v = ρ(a)(v).) Thus we can also define a representation of A as a pair

(V,ρ : A → Mat(V )).

This is completely analogous to how a group action of G on a set X with n elements just amounts to a group homomorphism G → Sₙ. From this perspective, what we are really trying to do is:

If A is an algebra, we are trying to represent the elements of A as matrices.

Abuse of Notation 19.2.3. While a representation is a pair (V,ρ) of both the vector space V and the action ρ, we frequently will just abbreviate it to “V ”. This is probably one of the worst abuses I will commit, but everyone else does it and I fear the mob.

Abuse of Notation 19.2.4. Rather than ρ(a)(v) we will just write ρ(a)v.

Example 19.2.5 (Representations of Mat(V ))

(a)
Let A = Mat₂(ℝ). Then there is a representation (ℝ⊕2, ρ) where a matrix a ∈ A just acts by a ⋅ v = ρ(a)(v) = a(v).
(b)
More generally, given a vector space V over any field k, there is an obvious representation of A = Mat(V) by a ⋅ v = ρ(a)(v) = a(v) (since a ∈ Mat(V)).

From the matrix perspective: if A = Mat(V ), then we can just represent A as matrices over V .

(c)
There are other representations of A = Mat₂(ℝ). A silly example is the representation (ℝ⊕4, ρ) given by

ρ : ⎡ a  b ⎤ ↦ ⎡ a  b  0  0 ⎤
    ⎣ c  d ⎦   ⎢ c  d  0  0 ⎥
               ⎢ 0  0  a  b ⎥
               ⎣ 0  0  c  d ⎦

More abstractly, viewing ℝ⊕4 as (ℝ⊕2) ⊕ (ℝ⊕2), this is a ⋅ (v₁, v₂) = (a ⋅ v₁, a ⋅ v₂).

Example 19.2.6 (Representations of polynomial algebras)

(a)
Let A = k. Then a representation of k is just any k-vector space V .
(b)
If A = k[x], then a representation (V, ρ) of A amounts to a vector space V plus the choice of a linear operator T ∈ Mat(V) (by T = ρ(x)).
(c)
If A = k[x]/(x²) then a representation (V, ρ) of A amounts to a vector space V plus the choice of a linear operator T ∈ Mat(V) satisfying T² = 0.
(d)
We can create arbitrary “functional equations” with this pattern. For example, if A = k[x, y]/(x² − x + y, y⁴) then representing A by V amounts to finding operators S, T ∈ Mat(V) satisfying S² = S − T and T⁴ = 0.

Example 19.2.7 (Representations of groups)

(a)
Let A = ℝ[S₃]. Then let

V = ℝ⊕3 = {(x, y, z) ∣ x, y, z ∈ ℝ}.

We can let A act on V as follows: given a permutation π S3, we permute the corresponding coordinates in V . So for example, if

If π = (1 2) then π ⋅(x,y,z) = (y,x,z).

This extends linearly to let A act on V , by permuting the coordinates.

From the matrix perspective, what we are doing is representing the permutations in S₃ as permutation matrices on ℝ⊕3, like

        ⎡ 0  1  0 ⎤
(1 2) ↦ ⎢ 1  0  0 ⎥.
        ⎣ 0  0  1 ⎦
(b)
More generally, let A = k[G]. Then a representation (V, ρ) of A amounts to a group homomorphism ψ : G → GL(V). (In particular, ρ(1G) = id_V.) We call this a group representation of G.

Example 19.2.8 (Regular representation)
Any k-algebra A is a representation (A, ρ) over itself, with a ⋅ b = ρ(a)(b) = ab (i.e. multiplication given by A). This is called the regular representation, denoted Reg(A).

19.3  Direct sums

Prototypical example for this section: The example with ℝ[S₃] seems best.

Definition 19.3.1. Let A be a k-algebra and let V = (V, ρ_V) and W = (W, ρ_W) be two representations of A. Then V ⊕ W is a representation, with action ρ given by

a⋅(v,w ) = (a ⋅v,a⋅ w).

This representation is called the direct sum of V and W.

Example 19.3.2
Earlier we let Mat₂(ℝ) act on ℝ⊕4 by

ρ : ⎡ a  b ⎤ ↦ ⎡ a  b  0  0 ⎤
    ⎣ c  d ⎦   ⎢ c  d  0  0 ⎥
               ⎢ 0  0  a  b ⎥
               ⎣ 0  0  c  d ⎦

So this is just a direct sum of two two-dimensional representations.

More generally, given representations (V, ρ_V) and (W, ρ_W), the representation ρ of V ⊕ W looks like

ρ(a) = ⎡ ρ_V(a)    0    ⎤
       ⎣   0     ρ_W(a) ⎦

Example 19.3.3 (Representation of Sn decomposes)
Let A = ℝ[S₃] again, acting via permutation of coordinates on

V = ℝ⊕3 = {(x, y, z) ∣ x, y, z ∈ ℝ}.

Consider the two subspaces

W1 = {(t,t,t) | t ∈ ℝ }
W2 = {(x, y,z) | x + y + z = 0}.

Note V = W₁ ⊕ W₂ as vector spaces. But each of W₁ and W₂ is a subrepresentation (since the action of A keeps each Wᵢ in place), so V = W₁ ⊕ W₂ as representations too.

Direct sums also come up when we play with algebras.

Proposition 19.3.4 (Representations of A ⊕ B are V_A ⊕ V_B)
Let A and B be k-algebras. Then every representation of A ⊕ B is of the form

V_A ⊕ V_B

where V A and V B are representations of A and B, respectively.

Sketch of Proof. Let (V, ρ) be a representation of A ⊕ B. For any v ∈ V, ρ(1A + 1B)v = ρ(1A)v + ρ(1B)v. One can then set V_A = {ρ(1A)v ∣ v ∈ V} and V_B = {ρ(1B)v ∣ v ∈ V}. These intersect trivially, since if ρ(1A)v = ρ(1B)v′, we have ρ(1A)v = ρ(1A ⋅ 1A)v = ρ(1A ⋅ 1B)v′ = ρ(0)v′ = 0_V, and similarly for the other side. □

19.4  Irreducible and indecomposable representations

Prototypical example for this section: k[S3] decomposes as the sum of two spaces.

One of the goals of representation theory will be to classify all possible representations of an algebra A. If we want to have a hope of doing this, then we want to discard “silly” representations such as

ρ : ⎡ a  b ⎤ ↦ ⎡ a  b  0  0 ⎤
    ⎣ c  d ⎦   ⎢ c  d  0  0 ⎥
               ⎢ 0  0  a  b ⎥
               ⎣ 0  0  c  d ⎦

and focus our attention instead on “irreducible” representations. This motivates:

Definition 19.4.1. Let V be a representation of A. A subrepresentation W ⊆ V is a subspace W with the property that for any a ∈ A and w ∈ W, a ⋅ w ∈ W. In other words, this subspace is invariant under the action of A.

Thus for example if V = W₁ ⊕ W₂ for representations W₁, W₂ then W₁ and W₂ are subrepresentations of V.

Definition 19.4.2. If V has no proper nonzero subrepresentations then it is irreducible. If there is no pair of proper subrepresentations W₁, W₂ such that V = W₁ ⊕ W₂, then we say V is indecomposable.

Definition 19.4.3. For brevity, an irrep of an algebra/group is a finite-dimensional irreducible representation.

Example 19.4.4 (Representation of Sn decomposes)
Let A = ℝ[S₃] again, acting via permutation of coordinates on

V = ℝ⊕3 = {(x, y, z) ∣ x, y, z ∈ ℝ}.

Consider again the two subspaces

W1 = {(t,t,t) | t ∈ ℝ }
W2 = {(x, y,z) | x + y + z = 0}.

As we’ve seen, V = W₁ ⊕ W₂, and thus V is not irreducible. But one can show that W₁ and W₂ are irreducible (and hence indecomposable) as follows.

  • W₁ is one-dimensional, so it has no proper nonzero subspaces at all.
  • For W₂, take any nonzero w = (x, y, z) ∈ W₂. Since x + y + z = 0 and w ≠ 0, two coordinates of w differ, say x ≠ y. Then w − (1 2) ⋅ w = (x − y)(1, −1, 0), so any subrepresentation containing w contains (1, −1, 0); applying permutations, it also contains (0, 1, −1), and these two vectors span W₂.

Thus V breaks down completely into irreps.

Unfortunately, if W is a subrepresentation of V, then it is not necessarily the case that we can find a supplementary subrepresentation W′ such that V = W ⊕ W′. Put another way, if V is reducible, we know that it has a proper nonzero subrepresentation, but a decomposition requires two subrepresentations. Here is a standard counterexample:

Exercise 19.4.5. Let A = ℝ[x], and let V = ℝ⊕2 be the representation with action

      [    ]
       1  1
ρ(x) = 0  1 .

Show that the only proper nonzero subrepresentation is W = {(t, 0) ∣ t ∈ ℝ}. So V is not irreducible, but it is indecomposable.

Here is a slightly more optimistic example, and the “prototypical example” that you should keep in mind.

Exercise 19.4.6. Let A = Matd(k) and consider the obvious representation kd of A that we described earlier. Show that it is irreducible. (This is obvious if you understand the definitions well enough.)

19.5  Morphisms of representations

We now proceed to define the morphisms between representations.

Definition 19.5.1. Let (V,ρV ) and (W,ρW ) be representations of A. An intertwining operator, or morphism, is a linear map T : V W such that

T (a⋅v) = a ⋅T(v)

for any a ∈ A, v ∈ V. (Note that the first ⋅ is the action of ρ_V and the second ⋅ is the action of ρ_W.) This is exactly what you expect if you think of V and W as “left A-modules”. If T is invertible, then it is an isomorphism of representations and we say V ≅ W.

Remark 19.5.2 (For commutative diagram lovers) The condition T(a ⋅ v) = a ⋅ T(v) can be read as saying that the square

V ──ρ_V(a)──> V
│             │
T             T
↓             ↓
W ──ρ_W(a)──> W

commutes for any a ∈ A.

Remark 19.5.3 (For category lovers) A representation is just a “bilinear” functor from an abelian one-object category {∗} (with Hom(∗, ∗) ≅ A) to the abelian category Vect_k. Then an intertwining operator is just a natural transformation.

Here are some examples of intertwining operators.

Example 19.5.4 (Intertwining operators)

(a)
For any λ ∈ k, the scalar map T(v) = λv is intertwining.
(b)
If W ⊆ V is a subrepresentation, then the inclusion W ↪ V is an intertwining operator.
(c)
The projection map V₁ ⊕ V₂ → V₁ is an intertwining operator.
(d)
Let V = ℝ⊕2 and represent A = ℝ[x] by (V, ρ) where

ρ(x) = ⎡ 0   1 ⎤
       ⎣ −1  0 ⎦.

Thus ρ(x) is rotation by 90° around the origin. Let T be rotation by 30°. Then T : V → V is intertwining (the rotations commute).

Exercise 19.5.5 (Kernel and image are subrepresentations). Let T : V W be an intertwining operator.

(a)
Show that ker T ⊆ V is a subrepresentation of V.
(b)
Show that im T ⊆ W is a subrepresentation of W.

The previous exercise gives us the famous Schur’s lemma.

Theorem 19.5.6 (Schur’s lemma)
Let V and W be representations of a k-algebra A. Let T : V W be a nonzero intertwining operator. Then

(a)
If V is irreducible, then T is injective.
(b)
If W is irreducible, then T is surjective.

In particular if both V and W are irreducible then T is an isomorphism.

An important special case is when k is algebraically closed: then for an irrep V, the only intertwining operators T : V → V are multiplication by a constant.

Theorem 19.5.7 (Schur’s lemma for algebraically closed fields)
Let k be an algebraically closed field. Let V be an irrep of a k-algebra A. Then any intertwining operator T : V → V is multiplication by a scalar.

Exercise 19.5.8. Use the fact that T has an eigenvalue λ to deduce this from Schur’s lemma. (Consider T − λ id_V, and use Schur to deduce it’s zero.)

We have already seen the counterexample of rotation by 90° for k = ℝ; this was the same counterexample we gave to the assertion that all linear maps have eigenvalues.

19.6  The representations of Matd(k)

To give an example of the kind of progress already possible, we prove:

Theorem 19.6.1 (Representations of Matd(k))
Let k be any field, d be a positive integer and let W = k⊕d be the obvious representation of A = Mat_d(k). Then the only finite-dimensional representations of Mat_d(k) are W⊕n for some positive integer n (up to isomorphism). In particular, W⊕n is irreducible if and only if n = 1.

For concreteness, I’ll just sketch the case d = 2, since the same proof applies verbatim to other situations. This shows that the examples of representations of Mat2() we gave earlier are the only ones.

As we’ve said, this is essentially a functional equation. The algebra A = Mat₂(k) has basis given by four matrices

E₁ = ⎡ 1 0 ⎤   E₂ = ⎡ 0 0 ⎤   E₃ = ⎡ 0 1 ⎤   E₄ = ⎡ 0 0 ⎤
     ⎣ 0 0 ⎦        ⎣ 0 1 ⎦        ⎣ 0 0 ⎦        ⎣ 1 0 ⎦

satisfying relations like E₁ + E₂ = id_A, E₁² = E₁, E₁E₂ = 0, etc. So let V be a representation of A, and let Mᵢ = ρ(Eᵢ) for each i; we want to classify the possible matrices Mᵢ on V satisfying the same functional equations. This is because, for example,

id_V = ρ(id_A) = ρ(E₁ + E₂) = M₁ + M₂.

By the same token M1M3 = M3. Proceeding in a similar way, we can obtain the following multiplication table:

  ×  │ M₁   M₂   M₃   M₄
 ────┼──────────────────
  M₁ │ M₁   0    M₃   0
  M₂ │ 0    M₂   0    M₄        and    M₁ + M₂ = id_V
  M₃ │ 0    M₃   0    M₁
  M₄ │ M₄   0    M₂   0

Note that each Mi is a linear operator V V ; for all we know, it could have hundreds of entries. Nonetheless, given the multiplication table of the basis Ei we get the corresponding table for the Mi.
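The multiplication table of the Eᵢ can itself be checked mechanically; here is a small Python sketch of mine:

    def matmul(A, B):
        return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                           for j in range(2)) for i in range(2))

    E = {"M1": ((1, 0), (0, 0)), "M2": ((0, 0), (0, 1)),
         "M3": ((0, 1), (0, 0)), "M4": ((0, 0), (1, 0))}
    lookup = {mat: name for name, mat in E.items()}

    for a in E:                 # prints the table from the text, row by row
        row = [lookup.get(matmul(E[a], E[b]), "0") for b in E]
        print(a, row)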

So, in short, the problem is as follows:

Find all vector spaces V and quadruples of matrices Mi satisfying the multiplication table above.

Let W₁ = M₁(V) and W₂ = M₂(V) be the images of M₁ and M₂.

Claim 19.6.2. V = W1 W2.

Proof. First, note that for any v V we have

v = ρ(id)(v) = (M1 + M2 )v = M1v + M2v.

Moreover, we have that W1 W2 = {0}, because if M1v1 = M2v2 then M1v1 = M1(M1v1) = M1(M2v2) = 0. □

Claim 19.6.3. W1∼=W2.

Proof. Check that the maps

W₁ −(×M₄)→ W₂   and   W₂ −(×M₃)→ W₁

are well-defined and mutually inverse. □

Now, let e₁, …, eₙ be basis elements of W₁; thus M₄e₁, …, M₄eₙ are basis elements of W₂. However, each {eⱼ, M₄eⱼ} forms a basis of a subrepresentation isomorphic to W = k⊕2 (what’s the isomorphism?).

This finally implies that all representations of A are of the form W⊕n. In particular, W is irreducible because there are no representations of smaller dimension at all!

19.7  A few harder problems to think about

Problem 19A. Suppose we have one-dimensional representations V₁ = (V₁, ρ₁) and V₂ = (V₂, ρ₂) of A. Show that V₁ ≅ V₂ if and only if ρ₁(a) and ρ₂(a) are multiplication by the same constant for every a ∈ A.

Problem 19B (Schur’s lemma for commutative algebras). Let A be a commutative algebra over an algebraically closed field k. Prove that any irrep of A is one-dimensional.

Problem 19C. Let (V,ρ) be a representation of A. Then Mat(V ) is a representation of A with action given by

a⋅T = ρ (a) ∘T

for T Mat(V ).

(a)
Show that ρ : Reg(A) Mat(V ) is an intertwining operator.
(b)
If V is d-dimensional, show that Mat(V) ≅ V⊕d as representations of A.

Problem 19D. Fix an algebra A. Find all intertwining operators

T : Reg(A ) → Reg(A ).

Problem 19E. Let (V, ρ) be an indecomposable (not irreducible) representation of an algebra A. Prove that any intertwining operator T : V → V is either nilpotent or an isomorphism.

(Note that ??  doesn’t apply, since the field k may not be algebraically closed.)

20  Semisimple algebras

In what follows, assume the field k is algebraically closed.

Fix an algebra A and suppose you want to study its representations. We have a “direct sum” operation already. So, much like we pay special attention to prime numbers, we’re motivated to study irreducible representations and then build all the representations of A from there.

Unfortunately, we have seen (?? ) that there exists a representation which is not irreducible, and yet cannot be broken down as a direct sum (indecomposable). This is weird and bad, so we want to give a name to representations which are more well-behaved. We say that a representation is completely reducible if it doesn’t exhibit this bad behavior.

Even better, we say a finite-dimensional algebra A is semisimple if all its finite-dimensional representations are completely reducible. So when we study finite-dimensional representations of semisimple algebras A, we just have to figure out what the irreps are, and then piecing them together will give all the representations of A.

In fact, semisimple algebras A have even nicer properties. The culminating point of the chapter is when we prove that A is semisimple if and only if A ≅ ⊕ᵢ Mat(Vᵢ), where the Vᵢ are the irreps of A (yes, there are only finitely many!).

20.1  Schur’s lemma continued

Prototypical example for this section: For V irreducible, Hom_rep(V⊕2, V⊕2) ≅ k⊕4.

Definition 20.1.1. For an algebra A and representations V and W, we let Homrep(V,W) be the set of intertwining operators between them. (It is also a k-algebra.)

By Schur’s lemma (since k is algebraically closed, which again, we are taking as a standing assumption), we already know that if V and W are irreps, then

Hom_rep(V, W) ≅ { k  if V ≅ W
                { 0  if V ≇ W.

Can we say anything more? For example, it also tells us that

Hom_rep(V, V⊕2) ≅ k⊕2.

The possible maps are v ↦ (c₁v, c₂v) for some choice of c₁, c₂ ∈ k.

More generally, suppose V is an irrep and consider Hom_rep(V⊕m, V⊕n). An intertwining operator T : V⊕m → V⊕n is determined completely by the mn compositions

V ↪ V⊕m −T→ V⊕n ↠ V

where the first arrow is the inclusion of the ith component of V⊕m (for 1 ≤ i ≤ m) and the last arrow is the projection onto the jth component of V⊕n (for 1 ≤ j ≤ n). However, by Schur’s lemma on each of these compositions, we know they must be constant.

Thus, Hom_rep(V⊕m, V⊕n) consists of n × m “matrices” of constants, and the map is provided by

⎡ c₁₁  ⋯  c₁ₘ ⎤ ⎡ v₁ ⎤
⎢  ⋮   ⋱   ⋮  ⎥ ⎢ ⋮  ⎥ ∈ V⊕n
⎣ cₙ₁  ⋯  cₙₘ ⎦ ⎣ vₘ ⎦

where the cᵢⱼ ∈ k but vᵢ ∈ V; note the type mismatch! This is not a single linear map; rather, the outputs are n linear combinations of the m inputs.

More generally, we have:

Theorem 20.1.2 (Schur’s lemma for completely reducible representations)
Let V and W be completely reducible representations, and write V = ⊕ᵢ Vᵢ⊕nᵢ and W = ⊕ᵢ Vᵢ⊕mᵢ for integers nᵢ, mᵢ ≥ 0, where each Vᵢ is an irrep. Then

Hom_rep(V, W) ≅ ⊕ᵢ Mat_{mᵢ×nᵢ}(k),

meaning that an intertwining operator T : V → W amounts to, for each i, an mᵢ × nᵢ matrix of constants which gives a map Vᵢ⊕nᵢ → Vᵢ⊕mᵢ.

Corollary 20.1.3 (Subrepresentations of completely reducible representations)
Let V = ⊕ᵢ Vᵢ⊕nᵢ be completely reducible. Then any subrepresentation W of V is isomorphic to ⊕ᵢ Vᵢ⊕mᵢ where mᵢ ≤ nᵢ for each i, and the inclusion W ↪ V is given by the direct sum of inclusions Vᵢ⊕mᵢ ↪ Vᵢ⊕nᵢ, which are nᵢ × mᵢ matrices.

Proof. Apply Schur’s lemma to the inclusion W ↪ V. □

20.2  Density theorem

We are going to take advantage of the previous result to prove that finite-dimensional algebras have finitely many irreps.

Theorem 20.2.1 (Jacobson density theorem)
Let (V₁, ρ₁), …, (V_r, ρ_r) be pairwise nonisomorphic irreps of A. Then there is a surjective map of vector spaces

⊕ᵢ₌₁ʳ ρᵢ : A ↠ ⊕ᵢ₌₁ʳ Mat(Vᵢ).

The right way to think about this theorem is that

Density is the “Chinese remainder theorem” for irreps of A.

Recall that in number theory, the Chinese remainder theorem tells us that given lots of “unrelated” congruences, we can find a single N which simultaneously satisfies them all. Similarly, given lots of different nonisomorphic irreps of A, this means that we can select a single a ∈ A which induces any tuple (ρ₁(a), …, ρ_r(a)) of actions we want: a surprising result, since even the r = 1 case is not obvious at all!

a ∈ A ↦ ( ρ₁(a) = M₁ ∈ Mat(V₁), ρ₂(a) = M₂ ∈ Mat(V₂), …, ρ_r(a) = M_r ∈ Mat(V_r) ).

This also gives us the non-obvious corollary

Corollary 20.2.2 (Finiteness of number of representations)
Any finite-dimensional algebra A has at most dimA irreps.

Proof. If V₁, …, V_r are pairwise nonisomorphic irreps, then the density theorem gives a surjection A ↠ ⊕ᵢ Mat(Vᵢ), hence we have the inequality ∑ᵢ (dim Vᵢ)² ≤ dim A; in particular r ≤ dim A. □

Proof of density theorem. Let V = V₁ ⊕ ⋯ ⊕ V_r, so A acts on V = (V, ρ) by ρ = ⊕ᵢ ρᵢ. Thus by ??, we can instead consider ρ as an intertwining operator

ρ : Reg(A) → ⊕ᵢ₌₁ʳ Mat(Vᵢ) ≅ ⊕ᵢ₌₁ʳ Vᵢ⊕dᵢ

where dᵢ = dim Vᵢ.

We will use this instead as it will be easier to work with.

First, we handle the case r = 1. Fix a basis e₁, …, eₙ of V = V₁. Assume for contradiction that the map is not surjective. By ρ and the isomorphism above, we get a map of representations Reg(A) → V⊕n given by a ↦ (a ⋅ e₁, …, a ⋅ eₙ). By hypothesis it is not surjective: its image is a proper subrepresentation of V⊕n, isomorphic to V⊕m for some m < n. So by ?? there is an n × m matrix of constants X such that the image of Reg(A) → V⊕n coincides with the image of X : V⊕m → V⊕n. In particular, since 1_A ↦ (e₁, …, eₙ) lies in the image, a pre-image (v₁, …, vₘ) with X(v₁, …, vₘ) = (e₁, …, eₙ) can be found. But since m < n we can find constants c₁, …, cₙ, not all zero, such that the row vector (c₁, …, cₙ) applied to X is zero:

∑ᵢ cᵢeᵢ = [c₁ ⋯ cₙ] (e₁, …, eₙ)ᵀ = [c₁ ⋯ cₙ] X (v₁, …, vₘ)ᵀ = 0,

contradicting the fact that the eᵢ are linearly independent. Hence we conclude the theorem for r = 1.

As for r ≥ 2, the image ρ(A) is necessarily of the form ⊕ᵢ Vᵢ⊕rᵢ (by ??), and by the above rᵢ = dim Vᵢ for each i. □

20.3  Semisimple algebras

Definition 20.3.1. A finite-dimensional algebra A is semisimple if every finite-dimensional representation of A is completely reducible.

Theorem 20.3.2 (Semisimple algebras)
Let A be a finite-dimensional algebra. Then the following are equivalent:

(i)
A ≅ ⊕ᵢ Mat_{dᵢ}(k) for some dᵢ.
(ii)
A is semisimple.
(iii)
Reg(A) is completely reducible.

Proof. (i) =⇒ (ii) follows from ??  and ?? . (ii) =⇒ (iii) is tautological.

To see (iii) =⇒ (i), we use the following clever trick. Consider

Homrep (Reg(A),Reg (A)).

On one hand, by ??, it is isomorphic to A^op (A with opposite multiplication), because the only intertwining operators Reg(A) → Reg(A) are those of the form − ⋅ a. On the other hand, suppose that we have set Reg(A) = ⊕ᵢ Vᵢ⊕nᵢ. By ??, we have

A^op ≅ Hom_rep(Reg(A), Reg(A)) ≅ ⊕ᵢ Mat_{nᵢ×nᵢ}(k).

But Mat_n(k)^op ≅ Mat_n(k) (just by transposing), so we recover the desired conclusion. □

In fact, if we combine the above result with the density theorem (and ?? ), we obtain:

Theorem 20.3.3 (Sum of squares formula)
For a finite-dimensional algebra A we have

∑ᵢ dim(Vᵢ)² ≤ dim A

where the V i are the irreps of A; equality holds exactly when A is semisimple, in which case

Reg(A) ≅ ⊕ᵢ Mat(Vᵢ) ≅ ⊕ᵢ Vᵢ⊕ dim Vᵢ.

Proof. The inequality was already mentioned in ??. It is an equality if and only if the map ρ : A → ⊕ᵢ Mat(Vᵢ) is an isomorphism; this means all Vᵢ are present. □

Remark 20.3.4 (Digression) For any finite-dimensional A, the kernel of the map ρ : A → ⊕ᵢ Mat(Vᵢ) is denoted Rad(A) and is the so-called Jacobson radical of A; it’s the set of all a ∈ A which act by zero in all irreps of A. The usual definition of “semisimple” given in books is that this Jacobson radical is trivial.

20.4  Maschke’s theorem

We now prove that the representation theory of groups is as nice as possible.

Theorem 20.4.1 (Maschke’s theorem)
Let G be a finite group, and k an algebraically closed field whose characteristic does not divide |G|. Then k[G] is semisimple.

This tells us that when studying representations of groups, all representations are completely reducible.

Proof. Consider any finite-dimensional representation (V, ρ) of k[G]. Given a proper subrepresentation W ⊆ V, our goal is to construct a supplementary G-invariant subspace W′ which satisfies

V = W ⊕ W′.

This will show that indecomposable =⇒ irreducible, which is enough to show k[G] is semisimple.

Let π : V → W be any projection of V onto W, meaning π(v) = v ⟺ v ∈ W. We consider the averaging map P : V → V defined by

P = (1/|G|) ∑_{g∈G} ρ(g⁻¹) ∘ π ∘ ρ(g).

We’ll use the following properties of the map:

Exercise 20.4.2. Show that the map P satisfies:

  • For any w ∈ W, P(w) = w.
  • For any v ∈ V, P(v) ∈ W.
  • The map P : V → V is an intertwining operator.

Thus P is idempotent (it is the identity on its image W), so by ?? we have V = ker P ⊕ im P; but both ker P and im P are subrepresentations, as desired. □
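To see the averaging trick in action, here is a small NumPy sketch (my own illustration, with helper names of my choosing): we take the permutation representation of S3 on ℂ^3, the invariant line W spanned by (1,1,1), and an arbitrarily chosen, non-equivariant projection π onto W.

    import numpy as np
    from itertools import permutations

    # Permutation representation of S3 on C^3: rho(sigma) sends e_i to e_{sigma(i)}.
    def rho(sigma):
        M = np.zeros((3, 3))
        for i, j in enumerate(sigma):
            M[j, i] = 1
        return M

    G = [rho(s) for s in permutations(range(3))]

    # W = span{(1,1,1)}.  An arbitrary projection onto W: pi(v) = v_0 * (1,1,1).
    pi = np.array([[1, 0, 0], [1, 0, 0], [1, 0, 0]], dtype=float)

    # Average over the group: P = (1/|G|) sum_g rho(g)^{-1} pi rho(g).
    # (For permutation matrices, the transpose is the inverse.)
    P = sum(g.T @ pi @ g for g in G) / len(G)

    assert np.allclose(P @ P, P)                      # P is idempotent
    assert np.allclose(P @ np.ones(3), np.ones(3))    # P is the identity on W
    assert all(np.allclose(P @ g, g @ P) for g in G)  # P is an intertwiner

The averaged P turns out to be v ↦ (mean of the coordinates of v)·(1,1,1), and ker P is the complementary subrepresentation.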

Remark 20.4.3 — In the case where k = ℂ, there is a shorter proof. Suppose B : V × V → ℂ is an arbitrary inner form (say, positive definite). Then we can "average" it to obtain a new inner form
\[ \langle v, w \rangle := \frac{1}{|G|} \sum_{g \in G} B(g \cdot v, g \cdot w). \]
The averaged form ⟨−, −⟩ is G-invariant, in the sense that ⟨v, w⟩ = ⟨g·v, g·w⟩. Then one sees that if W ⊆ V is a subrepresentation, so is its orthogonal complement W^⊥. This implies the result.

20.5  Example: the representations of ℂ[S3]

We compute all irreps of ℂ[S3]. I'll take for granted right now that there are exactly three such representations (which will be immediate by the first theorem in the next chapter: we'll in fact see that the number of irreps of G is exactly equal to the number of conjugacy classes of G).

Given that, if the three representations have dimensions d_1, d_2, d_3, then we ought to have
\[ d_1^2 + d_2^2 + d_3^2 = |G| = 6. \]
From this, combined with some deep arithmetic, we deduce that we should have d_1 = d_2 = 1 and d_3 = 2 or some permutation.

In fact, we can describe these representations explicitly. First, we define:

Definition 20.5.1. Let G be a group. The complex trivial group representation of a group G is the one-dimensional representation ℂtriv = (ℂ, ρ) where g · v = v for all g ∈ G and v ∈ ℂ (i.e. ρ(g) = id for all g ∈ G).

Remark 20.5.2 (Warning) — The trivial representation of an algebra A doesn't make sense for us: we might want to set a · v = v, but this isn't linear in a ∈ A. (You could try to force it to work by deleting the condition 1_A · v = v from our definition; then one can just set a · v = 0. But even then ℂtriv would not be the trivial representation of k[G].)

Then the representations are:

  • ℂtriv, the trivial representation just described;
  • ℂsign, the sign representation, in which each σ ∈ S3 acts on ℂ by multiplication by sign(σ) = ±1; and
  • refl0, the two-dimensional representation on {(x, y, z) ∈ ℂ^3 : x + y + z = 0}, where σ ∈ S3 acts by permuting the three coordinates.

Since 1^2 + 1^2 + 2^2 = 6, these are all the irreps of S3. Note that, if we take the permutation representation V of S3 on k^3, we just get that V = refl0 ⊕ ℂtriv.

20.6  A few harder problems to think about

Problem 20A. Find all the irreps of ℂ[ℤ/nℤ].

Problem 20B (Maschke requires |G| finite). Consider the representation of the group ℝ (under addition) on ℂ^{⊕2} given by the homomorphism
\[ \mathbb R \to \operatorname{Mat}_2(\mathbb C) \quad \text{by} \quad t \mapsto \begin{bmatrix} 1 & t \\ 0 & 1 \end{bmatrix}. \]

Show that this representation is not irreducible, but it is indecomposable.

Problem 20C. Prove that all irreducible representations of a finite group are finite-dimensional.

Problem 20D. Determine all the complex irreps of D10.

21  Characters

Characters are basically the best thing ever. To every representation V of A we will attach a so-called character χ_V : A → k. It will turn out that the characters of the irreps will determine the representations completely. Thus an irrep is specified by just dim A numbers.

21.1  Definitions

Definition 21.1.1. Let V = (V, ρ) be a finite-dimensional representation of A. The character χ_V : A → k attached to V is defined by χ_V = Tr ∘ ρ, i.e.
\[ \chi_V(a) := \operatorname{Tr}\left( \rho(a) : V \to V \right). \]

Since Tr and ρ are additive, this is a k-linear map (but it is not multiplicative). Note also that χ_{V ⊕ W} = χ_V + χ_W for any representations V and W.

We are especially interested in the case A = k[G], of course. As usual, we just have to specify χ_V(g) for each g ∈ G to get the whole map k[G] → k. Thus we often think of χ_V as a function G → k, called a character of the group G. Here is the case G = S3:

Example 21.1.2 (Character table of S3)
Let's consider the three irreps of G = S3 from before. For ℂtriv all traces are 1; for ℂsign the traces are ±1 depending on sign (obviously, for one-dimensional maps k → k the trace "is" just the map itself). For refl0 we take the basis (1, 0, −1) and (0, 1, −1), say, and compute the traces directly in this basis.

χ_V(g) | id | (1 2) | (2 3) | (3 1) | (1 2 3) | (3 2 1)
ℂtriv  |  1 |   1   |   1   |   1   |    1    |    1
ℂsign  |  1 |  −1   |  −1   |  −1   |    1    |    1
refl0  |  2 |   0   |   0   |   0   |   −1    |   −1

The above table is called the character table of the group G. It has certain mysterious properties, which we will prove as the chapter progresses.

(I) The value of χ_V(g) only depends on the conjugacy class of g.
(II) The number of rows equals the number of conjugacy classes.
(III) The sum of the squares of the entries of any row (over all six g ∈ G) is 6 again!
(IV) The "dot product" of any two distinct rows is zero.

Abuse of Notation 21.1.3. The name "character" for χ_V : G → k is a bit of a misnomer. This χ_V is not multiplicative in any way, as the above example shows: one can almost think of it as an element of k^{⊕|G|}.

Question 21.1.4. Show that χV (1A) = dimV , so one can read the dimensions of the representations from the leftmost column of a character table.

21.2  The dual space modulo the commutator

For any algebra, we first observe that since Tr(TS) = Tr(ST), we have for any V that
\[ \chi_V(ab) = \chi_V(ba). \]

This explains observation (I) from earlier:

Question 21.2.1. Deduce that if g and h are in the same conjugacy class of a group G, and V is a representation of k[G], then χ_V(g) = χ_V(h).

Now, given our algebra A we define the commutator [A, A] to be the subspace spanned by elements of the form xy − yx. (Note that this is merely a subspace, not necessarily an ideal.) Thus [A, A] is contained in the kernel of each χ_V.

Definition 21.2.2. The space A/[A,A] is called the abelianization of A; for brevity we denote it by A^ab. We think of this as "A modulo the relation ab = ba for each a, b ∈ A."

So we can think of each character χ_V as an element of (A^ab)^∨.

Example 21.2.3 (Examples of abelianizations)

(a) If A is commutative, then [A, A] = {0} and A^ab = A.
(b) If A = Mat_d(k), then [A, A] consists exactly of the d × d matrices of trace zero. (Proof: harmless exercise.) Consequently, A^ab is one-dimensional.
(c) Suppose A = k[G]. We claim that dim A^ab is equal to the number of conjugacy classes of G. Indeed, an element of A^∨ can be thought of as just an arbitrary function ξ : G → k; it vanishes on [A, A] exactly when ξ(gh) = ξ(hg) for every g, h ∈ G, which is to say ξ is constant on conjugacy classes of G. So (A^ab)^∨, and hence A^ab, has dimension equal to the number of conjugacy classes.

Theorem 21.2.4 (Characters of representations of algebras)
Let A be an algebra over an algebraically closed field. Then

(a) Characters of pairwise non-isomorphic irreps are linearly independent as elements of (A^ab)^∨.
(b) If A is finite-dimensional and semisimple, then the characters attached to irreps form a basis of (A^ab)^∨.

In particular, in (b) the number of irreps of A equals dim A^ab.

Proof. Part (a) is more or less obvious by the density theorem. Suppose there is a linear dependence, so that for every a ∈ A we have
\[ c_1 \chi_{V_1}(a) + c_2 \chi_{V_2}(a) + \dots + c_r \chi_{V_r}(a) = 0 \]
for some integer r.

Question 21.2.5. Deduce that c1 = ⋅⋅⋅ = cr = 0 from the density theorem.

For part (b), by semisimplicity we may assume that
\[ A = \bigoplus_{i=1}^{r} \operatorname{Mat}(V_i) \]
where V_1, …, V_r are the irreps of A. Since we have already shown the characters are linearly independent, we need only show that dim(A/[A,A]) = r, which follows from the observation earlier that each Mat(V_i) has a one-dimensional abelianization. □

Since dim ℂ[G]^ab is the number of conjugacy classes of G, this completes the proof of (II).

21.3  Orthogonality of characters

Now we specialize to the case of finite groups G, represented over ℂ.

Definition 21.3.1. Let Classes(G) denote the set of conjugacy classes of G.

If G has r conjugacy classes, then it has r irreps. Each (finite-dimensional) representation V , irreducible or not, gives a character χV .

Abuse of Notation 21.3.2. From now on, we will often regard χ_V as a function G → ℂ or as a function Classes(G) → ℂ. So, for example, we will write both χ_V(g) (for g ∈ G) and χ_V(C) (for a conjugacy class C); the latter just means χ_V(g_C) for any representative g_C ∈ C.

Definition 21.3.3. Let Funclass(G) denote the set of functions Classes(G) → ℂ, viewed as a vector space over ℂ. We endow it with the inner form
\[ \langle f_1, f_2 \rangle = \frac{1}{|G|} \sum_{g \in G} f_1(g) \overline{f_2(g)}. \]

This is the same “dot product” that we mentioned at the beginning, when we looked at the character table of S3. We now aim to prove the following orthogonality theorem, which will imply (III) and (IV) from earlier.

Theorem 21.3.4 (Orthogonality)
For any finite-dimensional complex representations V and W of G we have
\[ \langle \chi_V, \chi_W \rangle = \dim \operatorname{Hom}_{\mathrm{rep}}(W, V). \]
In particular, if V and W are irreps then
\[ \langle \chi_V, \chi_W \rangle = \begin{cases} 1 & V \cong W \\ 0 & \text{otherwise.} \end{cases} \]

Corollary 21.3.5 (Irreps give an orthonormal basis)
The characters associated to irreps form an orthonormal basis of Funclass(G).

In order to prove this theorem, we have to define the dual representation and the tensor representation, which give a natural way to deal with the quantity χ_V(g)·\overline{χ_W(g)}.

Definition 21.3.6. Let V = (V, ρ) be a representation of G. The dual representation V^∨ is the representation on the dual space V^∨, with the action of G given as follows: for each ξ ∈ V^∨, the action of g gives an element g·ξ ∈ V^∨ specified by
\[ v \mapsto \xi\left( \rho(g^{-1})(v) \right). \]

Definition 21.3.7. Let V = (V, ρ_V) and W = (W, ρ_W) be group representations of G. The tensor product of V and W is the group representation on V ⊗ W with the action of G given on pure tensors by
\[ g \cdot (v \otimes w) = \left( \rho_V(g)(v) \right) \otimes \left( \rho_W(g)(w) \right), \]
which extends linearly to define the action of G on all of V ⊗ W.

Remark 21.3.8 — Warning: the definition for tensors does not extend to algebras. We might hope that a · (v ⊗ w) = (a · v) ⊗ (a · w) would work, but this is not even linear in a ∈ A (what happens if we take a = 2, for example?).

Theorem 21.3.9 (Character traces)
If V and W are finite-dimensional representations of G, then for any g ∈ G:

(a) χ_{V ⊕ W}(g) = χ_V(g) + χ_W(g).
(b) χ_{V ⊗ W}(g) = χ_V(g) · χ_W(g).
(c) χ_{V^∨}(g) = \overline{χ_V(g)}.

Proof. Parts (a) and (b) follow from the identities Tr(S ⊕ T) = Tr(S) + Tr(T) and Tr(S ⊗ T) = Tr(S)·Tr(T). However, part (c) is trickier. As (ρ(g))^{|G|} = ρ(g^{|G|}) = ρ(1_G) = id_V by Lagrange's theorem, we can diagonalize ρ(g), say with eigenvalues λ_1, …, λ_n which are |G|th roots of unity, corresponding to eigenvectors e_1, …, e_n. Then we see that in the dual basis of e_1, …, e_n, the action of g on V^∨ has eigenvalues λ_1^{-1}, λ_2^{-1}, …, λ_n^{-1}. So
\[ \chi_V(g) = \sum_{i=1}^{n} \lambda_i \quad\text{and}\quad \chi_{V^\vee}(g) = \sum_{i=1}^{n} \lambda_i^{-1} = \sum_{i=1}^{n} \overline{\lambda_i}, \]
where the last step follows from the identity |z| = 1 ⟹ z^{-1} = \overline{z}. □

Remark 21.3.10 (Warning) — The identities (b) and (c) do not extend linearly to ℂ[G], i.e. it is not true, for example, that χ_{V^∨}(a) = \overline{χ_V(a)} if we think of χ_V as a map ℂ[G] → ℂ.

Proof of orthogonality relation. The key point is that we can now reduce the sum of products to just a single character:
\[ \chi_V(g) \overline{\chi_W(g)} = \chi_{V \otimes W^\vee}(g). \]
So we can rewrite the sum in question as just
\[ \langle \chi_V, \chi_W \rangle = \frac{1}{|G|} \sum_{g \in G} \chi_{V \otimes W^\vee}(g) = \chi_{V \otimes W^\vee}\left( \frac{1}{|G|} \sum_{g \in G} g \right). \]
Let P : V ⊗ W^∨ → V ⊗ W^∨ be the action of \frac{1}{|G|} \sum_{g \in G} g, so we wish to find Tr P.

Exercise 21.3.11. Show that P is idempotent. (Compute P P directly.)

Hence V ⊗ W^∨ = ker P ⊕ im P (by ??), and im P is the subspace of elements which are fixed under G. From this we deduce that
\[ \operatorname{Tr} P = \dim \operatorname{im} P = \dim \left\{ x \in V \otimes W^\vee \mid g \cdot x = x \;\; \forall g \in G \right\}. \]
Now, consider the natural isomorphism V ⊗ W^∨ → Hom(W, V).

Exercise 21.3.12. Let g ∈ G. Show that under this isomorphism, T ∈ Hom(W, V) satisfies g·T = T if and only if T(g·w) = g·T(w) for each w ∈ W. (This is just unwinding three or four definitions.)

Consequently, ⟨χ_V, χ_W⟩ = Tr P = dim Hom_rep(W, V), as desired. □

The orthogonality relation gives us a fast and mechanical way to check whether a finite-dimensional representation V is irreducible: compute the traces χ_V(g) for each g ∈ G, and then check whether ⟨χ_V, χ_V⟩ = 1. So, for example, we could have seen directly from the character table that the three representations of S3 we found are irreps. Thus, any time we have a candidate list of all the irreps, we can now efficiently verify it.

21.4  Examples of character tables

Example 21.4.1 (Dihedral group on 10 elements)
Let D10 = ⟨r, s | r^5 = s^2 = 1, rs = sr^{−1}⟩. Let ω = exp(2πi/5). We write down four representations of D10:

  • ℂtriv, the trivial representation;
  • ℂsign, the one-dimensional representation with r ↦ 1 and s ↦ −1;
  • V_1, the two-dimensional representation where r acts as diag(ω, ω^{−1}) and s swaps the two coordinates; and
  • V_2, the two-dimensional representation where r acts as diag(ω^2, ω^{−2}) and s swaps the two coordinates.

We claim that these four representations are irreducible and pairwise non-isomorphic. We do so by writing the character table:

D10   | 1 | r, r^4    | r^2, r^3  | s r^k
ℂtriv | 1 |     1     |     1     |   1
ℂsign | 1 |     1     |     1     |  −1
V_1   | 2 | ω + ω^4   | ω^2 + ω^3 |   0
V_2   | 2 | ω^2 + ω^3 | ω + ω^4   |   0

Then a direct computation shows the orthogonality relations, hence we indeed have an orthonormal basis. For example, ⟨ℂtriv, ℂsign⟩ = \frac{1}{10}(1·1 + 2·1 + 2·1 + 5·(−1)) = 0.

Example 21.4.2 (Character table of S4)
We now have enough machinery to compute the character table of S4, which has five conjugacy classes (corresponding to the cycle types id, 2, 3, 4 and 2+2). First of all, we note that it has two one-dimensional representations, ℂtriv and ℂsign, and these are the only ones (because there are only two homomorphisms S4 → ℂ^×). So thus far we have the table

S4    | 1 | (∙ ∙) | (∙ ∙ ∙) | (∙ ∙ ∙ ∙) | (∙ ∙)(∙ ∙)
ℂtriv | 1 |   1   |    1    |     1     |     1
ℂsign | 1 |  −1   |    1    |    −1     |     1
  ⋮   |   |       |    ⋮    |           |

Note the columns represent 1 + 6 + 8 + 6 + 3 = 24 elements.

Now, the remaining three representations have dimensions d_1, d_2, d_3 with
\[ d_1^2 + d_2^2 + d_3^2 = 4! - 2 = 22, \]
which has only the solution (d_1, d_2, d_3) = (2, 3, 3) and its permutations. Now, we can take the refl0 representation
\[ \left\{ (w, x, y, z) \mid w + x + y + z = 0 \right\} \]
with basis (1, 0, 0, −1), (0, 1, 0, −1) and (0, 0, 1, −1). This can be geometrically checked to be irreducible, but we can also do this numerically by computing the character directly (this is tedious): it comes out to be 3, 1, 0, −1, −1, which indeed gives norm
\[ \langle \chi_{\mathrm{refl}_0}, \chi_{\mathrm{refl}_0} \rangle = \frac{1}{4!} \left( \underbrace{3^2}_{\text{id}} + \underbrace{6 \cdot 1^2}_{(\bullet\,\bullet)} + \underbrace{8 \cdot 0^2}_{(\bullet\,\bullet\,\bullet)} + \underbrace{6 \cdot (-1)^2}_{(\bullet\,\bullet\,\bullet\,\bullet)} + \underbrace{3 \cdot (-1)^2}_{(\bullet\,\bullet)(\bullet\,\bullet)} \right) = 1. \]

Note that we can also tensor this with the sign representation, to get another irreducible representation (since ℂsign has all traces ±1, the norm doesn't change). Finally, we recover the final row using orthogonality (we name its representation ℂ2, for lack of a better name); hence the completed table is as follows.

S4            | 1 | (∙ ∙) | (∙ ∙ ∙) | (∙ ∙ ∙ ∙) | (∙ ∙)(∙ ∙)
ℂtriv         | 1 |   1   |    1    |     1     |     1
ℂsign         | 1 |  −1   |    1    |    −1     |     1
ℂ2            | 2 |   0   |   −1    |     0     |     2
refl0         | 3 |   1   |    0    |    −1     |    −1
refl0 ⊗ ℂsign | 3 |  −1   |    0    |     1     |    −1
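The completed table can be sanity-checked against the inner form ⟨−,−⟩ from before in a few lines (a NumPy sketch of mine; columns are weighted by the class sizes 1, 6, 8, 6, 3):

    import numpy as np

    sizes = np.array([1, 6, 8, 6, 3])
    rows = {
        "triv":       [1,  1,  1,  1,  1],
        "sign":       [1, -1,  1, -1,  1],
        "C2":         [2,  0, -1,  0,  2],
        "refl0":      [3,  1,  0, -1, -1],
        "refl0*sign": [3, -1,  0,  1, -1],
    }

    def inner(u, v):  # <f1,f2> = (1/|G|) sum_g f1(g) conj(f2(g))
        return np.dot(sizes * np.array(u), np.conj(v)) / 24

    for a in rows:
        for b in rows:
            assert np.isclose(inner(rows[a], rows[b]), 1 if a == b else 0)
    print("the five rows are orthonormal")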

21.5  A few harder problems to think about

Problem 21A (Reading decompositions from characters). Let W be a complex representation of a finite group G. Let V_1, …, V_r be the complex irreps of G and set n_i = ⟨χ_W, χ_{V_i}⟩. Prove that each n_i is a non-negative integer and
\[ W = \bigoplus_{i=1}^{r} V_i^{\oplus n_i}. \]

Problem 21B. Consider complex representations of G = S4. The representation refl0 ⊗ refl0 is 9-dimensional, so it is clearly reducible. Compute its decomposition in terms of the five irreducible representations.

Problem 21C (Tensoring by one-dimensional irreps). Let V and W be irreps of G, with dim W = 1. Show that V ⊗ W is irreducible.

Problem 21D (Quaternions). Compute the character table of the quaternion group Q8.

Problem 21E (Second orthogonality formula). Let g and h be elements of a finite group G, and let V_1, …, V_r be the irreps of G. Prove that
\[ \sum_{i=1}^{r} \chi_{V_i}(g) \overline{\chi_{V_i}(h)} = \begin{cases} |C_G(g)| & \text{if } g \text{ and } h \text{ are conjugates} \\ 0 & \text{otherwise} \end{cases} \]
Here, C_G(g) = {x ∈ G : xg = gx} is the centralizer of g.

22  Some applications

With all this setup, we now take the time to develop some nice results which are of independent interest.

22.1  Frobenius divisibility

Theorem 22.1.1 (Frobenius divisibility)
Let V be a complex irrep of a finite group G. Then dimV divides |G|.

The proof of this will require algebraic integers (developed in the algebraic number theory chapter). Recall that an algebraic integer is a complex number which is the root of a monic polynomial with integer coefficients, that these algebraic integers form a ring \overline{ℤ} under addition and multiplication, and that \overline{ℤ} ∩ ℚ = ℤ.

First, we prove:

Lemma 22.1.2 (Elements of ℤ[G] are integral)
Let α ∈ ℤ[G]. Then there exists a monic polynomial P with integer coefficients such that P(α) = 0.

Proof. Let A_k be the ℤ-span of 1, α, α^2, …, α^k. Since ℤ[G] is Noetherian (it is a finitely generated ℤ-module), the inclusions A_0 ⊆ A_1 ⊆ A_2 ⊆ ⋯ cannot all be strict, hence A_k = A_{k+1} for some k, which means α^{k+1} can be expressed in terms of lower powers of α. □

Proof of Frobenius divisibility. Let C_1, …, C_m denote the conjugacy classes of G. Then consider the rational number
\[ \frac{|G|}{\dim V}; \]
we will show it is an algebraic integer, which will prove the theorem. Observe that we can rewrite it as
\[ \frac{|G|}{\dim V} = \frac{|G| \langle \chi_V, \chi_V \rangle}{\dim V} = \sum_{g \in G} \frac{\chi_V(g) \overline{\chi_V(g)}}{\dim V}. \]
We split the sum over conjugacy classes, so
\[ \frac{|G|}{\dim V} = \sum_{i=1}^{m} \overline{\chi_V(C_i)} \cdot \frac{|C_i| \, \chi_V(C_i)}{\dim V}. \]
We claim that for every i,
\[ \frac{|C_i| \, \chi_V(C_i)}{\dim V} = \frac{1}{\dim V} \operatorname{Tr} T_i \]
is an algebraic integer, where
\[ T_i := \rho\left( \sum_{h \in C_i} h \right). \]
To see this, note that T_i commutes with the elements of G, and hence is an intertwining operator T_i : V → V. Thus by Schur's lemma, T_i = λ_i id_V and Tr T_i = λ_i dim V. By ??, λ_i ∈ \overline{ℤ}, as desired.

Now we are done, since \overline{χ_V(C_i)} ∈ \overline{ℤ} too (it is a sum of conjugates of roots of unity), so \frac{|G|}{\dim V} is a sum of products of algebraic integers, hence itself an algebraic integer. □

22.2  Burnside’s theorem

We now prove a group-theoretic result. This is the famous poster child for representation theory (in the same way that RSA is the poster child of number theory) because the result is purely group theoretic.

Recall that a group is simple if it has no normal subgroups other than itself and the trivial group. In fact, we will prove:

Theorem 22.2.1 (Burnside)
Let G be a nonabelian group of order p^a q^b (where p, q are distinct primes and a, b ≥ 0). Then G is not simple.

In what follows p and q will always denote prime numbers.

Lemma 22.2.2 (On gcd(|C|, dim V) = 1)
Let V = (V, ρ) be a complex irrep of G. Assume C is a conjugacy class of G with gcd(|C|, dim V) = 1. Then for any g ∈ C, either

  • ρ(g) is multiplication by a scalar, or
  • χ_V(g) = 0.

Proof. If ε_1, …, ε_n are the n eigenvalues of ρ(g) (which are roots of unity), then from the proof of Frobenius divisibility we know \frac{|C|}{n} χ_V(g) ∈ \overline{ℤ}; thus from gcd(|C|, n) = 1 (and the fact that χ_V(g) ∈ \overline{ℤ} itself, via Bézout) we get
\[ \frac 1n \chi_V(g) = \frac 1n \left( \varepsilon_1 + \dots + \varepsilon_n \right) \in \overline{\mathbb Z}. \]
So this follows readily from a fact from algebraic number theory, namely ??: either ε_1 = ⋯ = ε_n (first case) or ε_1 + ⋯ + ε_n = 0 (second case). □

Lemma 22.2.3 (Simple groups don’t have prime power conjugacy classes)
Let G be a finite simple group. Then G cannot have a conjugacy class of order pk (where k > 0).

Proof. By contradiction. Assume C is such a conjugacy class, and fix any g ∈ C. By the second orthogonality formula (??) applied to g and 1_G (which are not conjugate, since g ≠ 1_G) we have
\[ \sum_{i=1}^{r} \dim V_i \cdot \chi_{V_i}(g) = 0 \]
where the V_i are, as usual, all the irreps of G.

Exercise 22.2.4. Show that there exists a nontrivial irrep V such that p ∤ dim V and χ_V(g) ≠ 0. (Proceed by contradiction to show that \frac 1p ∈ \overline{ℤ} if not.)

Let V = (V, ρ) be the irrep mentioned. By the previous lemma, since χ_V(g) ≠ 0, we now know that ρ(g) acts as a scalar on V.

Now consider the subgroup
\[ H = \left\langle ab^{-1} \mid a, b \in C \right\rangle \subseteq G. \]
We claim this is a nontrivial proper normal subgroup of G. It is easy to check H is normal, and since |C| > 1 we have that H is nontrivial. As ρ(a) and ρ(b) are the same scalar for any a, b ∈ C, each element of H acts trivially on V; so since V is nontrivial and irreducible, H ≠ G. This contradicts the assumption that G was simple. □

With this lemma, Burnside's theorem follows by partitioning the |G| elements of our group into conjugacy classes. Assume for contradiction that G is simple. Each conjugacy class must have size either 1 (of which there are |Z(G)|, by ??) or divisible by pq (by the previous lemma), but on the other hand the sum of all the sizes equals |G| = p^a q^b. Consequently, we must have |Z(G)| > 1. But G is not abelian, hence Z(G) ≠ G, and thus the center Z(G) is a nontrivial proper normal subgroup, contradicting the assumption that G was simple.

22.3  Frobenius determinant

We finish with the following result, the problem that started the branch of representation theory. Given a finite group G, we create n = |G| variables {x_g}_{g∈G}, and an n × n matrix M_G whose (g, h) entry is the variable x_{gh}.

Example 22.3.1 (Frobenius determinants)

(a) If G = ℤ/2ℤ = ⟨T | T^2 = 1⟩, then the matrix would be
\[ M_G = \begin{bmatrix} x_{\mathrm{id}} & x_T \\ x_T & x_{\mathrm{id}} \end{bmatrix}. \]
Then det M_G = (x_id − x_T)(x_id + x_T).

(b) If G = S3, a long computation gives that the irreducible factorization of det M_G is
\[ \left( \sum_{\sigma \in S_3} x_\sigma \right) \left( \sum_{\sigma \in S_3} \operatorname{sign}(\sigma) \, x_\sigma \right) \left( F\left( x_{\mathrm{id}}, x_{(123)}, x_{(321)} \right) - F\left( x_{(12)}, x_{(23)}, x_{(31)} \right) \right)^2 \]
where F(a, b, c) = a^2 + b^2 + c^2 − ab − bc − ca; the latter factor is irreducible.
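Computations like (a) are easy to reproduce symbolically; here is a SymPy sketch of mine for cyclic groups (written additively, with variables x_0, …, x_{n−1} standing for the x_g):

    import sympy as sp

    def frobenius_det(n):  # G = Z/n: the (g,h) entry of M_G is x_{g+h mod n}
        x = sp.symbols(f"x0:{n}")
        M = sp.Matrix(n, n, lambda g, h: x[(g + h) % n])
        return sp.factor(M.det())

    print(frobenius_det(2))  # (x0 - x1)*(x0 + x1), matching (a)
    print(frobenius_det(3))  # -(x0+x1+x2) times a quadratic; over C the
                             # quadratic splits into two more linear factors

For n = 3 this gives one factor per conjugacy class, each of degree 1 with multiplicity 1, as the theorem below predicts for an abelian group.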

Theorem 22.3.2 (Frobenius determinant)
The polynomial det M_G (in |G| variables) factors into a product of irreducible polynomials such that

(i)
The number of polynomials equals the number of conjugacy classes of G, and
(ii)
The multiplicity of each polynomial equals its degree.

You may already be able to guess how the “sum of squares” result is related! (Indeed, look at deg detMG.)

Legend has it that Dedekind observed this behavior first in 1896. He didn’t know how to prove it in general, so he sent it in a letter to Frobenius, who created representation theory to solve the problem.

With all the tools we’ve built, it is now fairly straightforward to prove the result.

Proof. Let V = (V, ρ) = Reg(ℂ[G]) and let V_1, …, V_r be the irreps of G. Let's consider the map T : ℂ[G] → ℂ[G], which has matrix M_G in the usual basis of ℂ[G], namely
\[ T\left( \{x_g\}_{g \in G} \right) = \sum_{g \in G} x_g \rho(g) \in \operatorname{Mat}(V). \]
Thus we want to examine det T.

But we know that V = ⊕_{i=1}^{r} V_i^{⊕ dim V_i} as before, and so breaking down T over these subspaces we know
\[ \det T = \prod_{i=1}^{r} \left( \det\left( T|_{V_i} \right) \right)^{\dim V_i}. \]
So we only have to show two things: the polynomials det(T|_{V_i}) are irreducible, and they are pairwise different for different i.

Let V_i = (V_i, ρ_i), and let d = dim V_i.

Part VII
Quantum Algorithms

23  Quantum states and measurements

In this chapter we’ll explain how to set up quantum states using linear algebra. This will allow me to talk about quantum circuits in the next chapter, which will set the stage for Shor’s algorithm.

I won’t do very much physics (read: none at all). That is, I’ll only state what the physical reality is in terms of linear algebras, and defer the philosophy of why this is true to your neighborhood “Philosophy of Quantum Mechanics” class (which is a “social science” class at MIT!).

23.1  Bra-ket notation

Physicists have their own notation for vectors: whereas I previously used something like v, e_1, and so on, in this chapter you'll see the infamous bra-ket notation: a vector will be denoted by |∙⟩, where ∙ is some variable name: unlike in math or Python, this can include numbers, symbols, Unicode characters, whatever you like. This is called a "ket". To pay homage to physicists everywhere, we'll use this notation for this chapter too.

Abuse of Notation 23.1.1 (For this part, dim H < ∞). In this part on quantum computation, we'll use the word "Hilbert space" as defined earlier, but in fact all our Hilbert spaces will be finite-dimensional.

If dimH = n, then its orthonormal basis elements are often denoted

|0⟩ , |1⟩ ,..., |n − 1⟩

(instead of ei) and a generic element of H denoted by

|ψ⟩ , |ϕ ⟩,...

and various other Greek letters.

Now for any |ψ⟩ ∈ H, we can consider the canonical dual element in H^∨ (since H has an inner form), which we denote by ⟨ψ| (a "bra"). For example, if dim H = 2 then we can write
\[ |\psi\rangle = \begin{bmatrix} \alpha \\ \beta \end{bmatrix} \]
in an orthonormal basis, in which case
\[ \langle\psi| = \begin{bmatrix} \overline{\alpha} & \overline{\beta} \end{bmatrix}. \]
We can even write dot products succinctly in this notation: if |ϕ⟩ = \begin{bmatrix} \gamma \\ \delta \end{bmatrix}, then the dot product of |ϕ⟩ and |ψ⟩ is given by
\[ \langle \psi | \phi \rangle = \begin{bmatrix} \overline{\alpha} & \overline{\beta} \end{bmatrix} \begin{bmatrix} \gamma \\ \delta \end{bmatrix} = \overline{\alpha}\gamma + \overline{\beta}\delta. \]
So we will use the notation ⟨ψ|ϕ⟩ instead of the more mathematical ⟨|ψ⟩, |ϕ⟩⟩. In particular, the squared norm of |ψ⟩ is just ⟨ψ|ψ⟩. Concretely, for dim H = 2 we have ⟨ψ|ψ⟩ = |α|^2 + |β|^2.
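If you want to see the notation in coordinates, here is a tiny NumPy sketch of mine: a ket is a column vector, and the corresponding bra is its conjugate transpose.

    import numpy as np

    ket_psi = np.array([[1j], [1]]) / np.sqrt(2)  # |psi> = (i|0> + |1>)/sqrt(2)
    ket_phi = np.array([[1.0], [0.0]])            # |phi> = |0>

    bra_psi = ket_psi.conj().T                    # <psi| is the conjugate transpose
    print(bra_psi @ ket_phi)  # <psi|phi> = conj(alpha)*gamma + conj(beta)*delta
    print(bra_psi @ ket_psi)  # <psi|psi> = |alpha|^2 + |beta|^2 = 1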

23.2  The state space

If you think that’s weird, well, it gets worse.

In classical computation, a bit is either 0 or 1. More generally, we can think of a classical space of n possible states 0, …, n 1. Thus in the classical situation, the space of possible states is just a discrete set with n elements.

In quantum computation, a qubit is instead any complex linear combination of 0 and 1. To be precise, consider the normed complex vector space
\[ H = \mathbb C^{\oplus 2} \]
and denote the orthonormal basis elements by |0⟩ and |1⟩. Then a qubit is a nonzero element |ψ⟩ ∈ H, so that it can be written in the form
\[ |\psi\rangle = \alpha |0\rangle + \beta |1\rangle \]
where α and β are not both zero. Typically, we normalize so that |ψ⟩ has norm 1:
\[ \langle \psi | \psi \rangle = 1 \iff |\alpha|^2 + |\beta|^2 = 1. \]
In particular, we can recover the "classical" situation with |0⟩ ∈ H and |1⟩ ∈ H, but now we have some "intermediate" states, such as
\[ \frac{1}{\sqrt 2} \left( |0\rangle + |1\rangle \right). \]

Philosophically, what has happened is that:

Instead of allowing just the states |0⟩ and |1⟩, we allow any complex linear combination of them.

More generally, if dim H = n, then the possible states are nonzero elements
\[ c_0 |0\rangle + c_1 |1\rangle + \dots + c_{n-1} |n-1\rangle \]
which we usually normalize so that |c_0|^2 + |c_1|^2 + ⋯ + |c_{n−1}|^2 = 1.

23.3  Observations

Prototypical example for this section: id corresponds to not making a measurement since all its eigenvalues are equal, but any operator with distinct eigenvalues will cause collapse.

If you think that’s weird, well, it gets worse. First, some linear algebra:

Definition 23.3.1. Let V be a finite-dimensional inner product space. For a map T : V → V, the following conditions are equivalent:

  • ⟨Tv, w⟩ = ⟨v, Tw⟩ for any v, w ∈ V;
  • T = T^†, i.e. in any orthonormal basis, the matrix of T equals its own conjugate transpose.

A map T satisfying these conditions is called Hermitian.

Question 23.3.2. Show that T is normal.

Thus, we know that T is diagonalizable with respect to the inner form, so for a suitable basis we can write it in an orthonormal basis as
\[ T = \begin{bmatrix} \lambda_0 & 0 & \dots & 0 \\ 0 & \lambda_1 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \lambda_{n-1} \end{bmatrix}. \]

As we’ve said, this is fantastic: not only do we have a basis of eigenvectors, but the eigenvectors are pairwise orthogonal, and so they form an orthonormal basis of V .

Question 23.3.3. Show that all eigenvalues of T are real. (Use T = T^†.)

Back to quantum computation. Suppose we have a state |ψ⟩ ∈ H, where dim H = 2; we haven't distinguished a particular basis yet, so we just have a nonzero vector. Then the way observations work (and this is physics, so you'll have to take my word for it) is as follows:

Pick a Hermitian operator T : H → H; then observations of T return eigenvalues of T.

To be precise:

  • Write H = ⊕_λ H_λ, where H_λ is the λ-eigenspace of T, and decompose |ψ⟩ = ∑_λ |ψ_λ⟩ with |ψ_λ⟩ ∈ H_λ.
  • The measurement returns each eigenvalue λ with probability ⟨ψ_λ|ψ_λ⟩ / ⟨ψ|ψ⟩.
  • If the eigenvalue λ is returned, the state |ψ⟩ then collapses to the projection |ψ_λ⟩.

Note that in particular, for any nonzero constant c, |ψ⟩ and c|ψ⟩ are indistinguishable, which is why we like to normalize |ψ⟩. But the queerest thing of all is what happens to |ψ ⟩: by measuring it, we actually destroy information. This behavior is called quantum collapse.

In other words,

When we make a measurement, the coefficients from different eigenspaces are destroyed.

Why does this happen? Beats me…physics (and hence real life) is weird. But anyways, an example.

Example 23.3.4 (Quantum measurement of a state |ψ ⟩)
Let H = ℂ^{⊕2} with orthonormal basis |0⟩ and |1⟩, and consider the state
\[ |\psi\rangle = \frac{i}{\sqrt 5} |0\rangle + \frac{2}{\sqrt 5} |1\rangle = \begin{bmatrix} i/\sqrt 5 \\ 2/\sqrt 5 \end{bmatrix} \in H. \]

(a) Let
\[ T = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}. \]
This has eigenvectors |0⟩ = |0⟩_T and |1⟩ = |1⟩_T, with eigenvalues +1 and −1. So if we measure |ψ⟩ along T, we get +1 with probability 1/5 and −1 with probability 4/5. After this measurement, the original state collapses to |0⟩ if we measured +1, and |1⟩ if we measured −1. So we never learn the original probabilities.

(b) Now consider T = id, and arbitrarily pick two orthonormal eigenvectors |0⟩_T, |1⟩_T; thus |ψ⟩ = c_0 |0⟩_T + c_1 |1⟩_T. Since all eigenvalues of T are +1, our measurement will always be +1 no matter what we do. But there is also no collapsing, because none of the coefficients get destroyed.

(c) Now consider
\[ T = \begin{bmatrix} 0 & 7 \\ 7 & 0 \end{bmatrix}. \]
The two normalized eigenvectors are
\[ |0\rangle_T = \frac{1}{\sqrt 2} \begin{bmatrix} 1 \\ 1 \end{bmatrix} \qquad |1\rangle_T = \frac{1}{\sqrt 2} \begin{bmatrix} 1 \\ -1 \end{bmatrix} \]
with eigenvalues +7 and −7 respectively. In this basis, we have
\[ |\psi\rangle = \frac{2+i}{\sqrt{10}} |0\rangle_T + \frac{-2+i}{\sqrt{10}} |1\rangle_T. \]
So we get +7 with probability 1/2 and −7 with probability 1/2, and after the measurement, |ψ⟩ collapses to one of |0⟩_T and |1⟩_T.

Question 23.3.5. Suppose we measure |ψ⟩ with T and get λ. What happens if we measure with T again?

For H = ℂ^{⊕2} we can come up with more classes of examples using the so-called Pauli matrices. These are the three Hermitian matrices
\[ \sigma_z = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \qquad \sigma_x = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \qquad \sigma_y = \begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix}. \]

These matrices are important because:

Question 23.3.6. Show that these three matrices, plus the identity matrix, form a basis for the set of Hermitian 2 × 2 matrices.

So the Pauli matrices are a natural choice of basis.

Their normalized eigenvectors are
\[ |{\uparrow}\rangle = |0\rangle = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \qquad |{\downarrow}\rangle = |1\rangle = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \]
\[ |{\rightarrow}\rangle = \frac{1}{\sqrt 2} \begin{bmatrix} 1 \\ 1 \end{bmatrix} \qquad |{\leftarrow}\rangle = \frac{1}{\sqrt 2} \begin{bmatrix} 1 \\ -1 \end{bmatrix} \]
\[ |{\otimes}\rangle = \frac{1}{\sqrt 2} \begin{bmatrix} 1 \\ i \end{bmatrix} \qquad |{\odot}\rangle = \frac{1}{\sqrt 2} \begin{bmatrix} 1 \\ -i \end{bmatrix} \]
which we call "z-up", "z-down", "x-up", "x-down", "y-up", "y-down". (The eigenvalues are +1 for "up" and −1 for "down".) So, given a state |ψ⟩ ∈ ℂ^{⊕2}, we can make a measurement with respect to any of these three bases by using the corresponding Pauli matrix.

In light of this, the previous examples were (a) measuring along σz, (b) measuring along id, and (c) measuring along 7σx.

Notice that if we are given a state |ψ⟩, and are told in advance that it is either |→⟩ or |←⟩ (or any other pair of orthogonal states), then we are in what is more or less a classical situation. Specifically, if we make a measurement along σ_x, then we find out which state |ψ⟩ was in (with 100% certainty), and the state does not undergo any collapse. Thus, orthogonal states are reliably distinguishable.

23.4  Entanglement

Prototypical example for this section: Singlet state: spooky action at a distance.

If you think that’s weird, well, it gets worse.

Qubits don’t just act independently: they can talk to each other by means of a tensor product. Explicitly, consider

       ⊕2    ⊕2
H  = ℂ   ⊗ ℂ

endowed with the norm described in ?? . One should think of this as a qubit A in a space HA along with a second qubit B in a different space HB, which have been allowed to interact in some way, and H = HA HB is the set of possible states of both qubits. Thus

|0⟩  ⊗ |0⟩ ,   |0⟩  ⊗ |1⟩ ,   |1⟩  ⊗ |0⟩ ,   |1⟩  ⊗ |1⟩
  A      B      A      B      A      B      A     B

is an orthonormal basis of H; here |i⟩A is the basis of the first 2 while |i⟩B is the basis of the second 2, so these vectors should be thought of as “unrelated” just as with any tensor product. The pure tensors mean exactly what you want: for example |0⟩A |1⟩B means “0 for qubit A and 1 for qubit B”.

As before, a measurement of a state in H requires a Hermitian map H → H. In particular, if we only want to measure the qubit B along M_B, we can use the operator
\[ \operatorname{id}_A \otimes M_B. \]
The eigenvalues of this operator coincide with the ones for M_B, and the eigenspace for λ will be H_A ⊗ (H_B)_λ, so when we take the projection the A qubit will be unaffected.

This does what you would hope for pure tensors in H:

Example 23.4.1 (Two non-entangled qubits)
Suppose we have qubit A in the state \frac{i}{\sqrt 5}|0⟩_A + \frac{2}{\sqrt 5}|1⟩_A and qubit B in the state \frac{1}{\sqrt 2}|0⟩_B + \frac{1}{\sqrt 2}|1⟩_B. So, the two qubits in tandem are represented by the pure tensor
\[ |\psi\rangle = \left( \frac{i}{\sqrt 5} |0\rangle_A + \frac{2}{\sqrt 5} |1\rangle_A \right) \otimes \left( \frac{1}{\sqrt 2} |0\rangle_B + \frac{1}{\sqrt 2} |1\rangle_B \right). \]
Suppose we measure |ψ⟩ along
\[ M = \operatorname{id}_A \otimes \sigma_z^B. \]
The eigenspace decomposition is

  • the +1-eigenspace is H_A ⊗ |0⟩_B, spanned by |0⟩_A ⊗ |0⟩_B and |1⟩_A ⊗ |0⟩_B, and
  • the −1-eigenspace is H_A ⊗ |1⟩_B, spanned by |0⟩_A ⊗ |1⟩_B and |1⟩_A ⊗ |1⟩_B.

(We could have used other bases, like |→⟩_A ⊗ |0⟩_B and |←⟩_A ⊗ |0⟩_B for the first eigenspace, but it doesn't matter.) Expanding |ψ⟩ in the four-element basis, we find that we'll get the first eigenspace with probability
\[ \left| \frac{i}{\sqrt{10}} \right|^2 + \left| \frac{2}{\sqrt{10}} \right|^2 = \frac 12 \]
and the second eigenspace with probability \frac 12 as well. (Note how the coefficients for A don't do anything!) After the measurement, we destroy the coefficients of the other eigenspace; thus (after re-normalization) we obtain the collapsed state
\[ \left( \frac{i}{\sqrt 5} |0\rangle_A + \frac{2}{\sqrt 5} |1\rangle_A \right) \otimes |0\rangle_B \quad\text{or}\quad \left( \frac{i}{\sqrt 5} |0\rangle_A + \frac{2}{\sqrt 5} |1\rangle_A \right) \otimes |1\rangle_B \]
again with 50% probability each.

So this model lets us more or less work with the two qubits independently: when we make the measurement, we just make sure to not touch the other qubit (which corresponds to the identity operator).
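Concretely, the bookkeeping in Example 23.4.1 looks like this in NumPy (a sketch of mine; np.kron plays the role of ⊗, with qubit A as the first factor):

    import numpy as np

    ketA = np.array([1j, 2]) / np.sqrt(5)
    ketB = np.array([1, 1]) / np.sqrt(2)
    psi = np.kron(ketA, ketB)                  # the pure tensor |psi>

    # Projector onto the +1 eigenspace of id_A (x) sigma_z^B, i.e. H_A (x) |0>_B:
    proj = np.kron(np.eye(2), np.diag([1.0, 0.0]))

    psi_plus = proj @ psi
    print(np.vdot(psi_plus, psi_plus).real)    # 0.5, as computed above
    print(psi_plus / np.linalg.norm(psi_plus)) # (i/sqrt5, 2/sqrt5) (x) |0>_B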

Exercise 23.4.2. Show that if we instead measure the |ψ⟩ in this example along id_A ⊗ σ_x^B, there is no collapse at all. What's the result of this measurement?

Since the ⊗ is getting cumbersome to write, we say:

Abuse of Notation 23.4.3. From now on |0⟩A |0⟩B will be abbreviated to just |00⟩, and similarly for |01⟩, |10⟩, |11⟩.

Example 23.4.4 (Simultaneously measuring a general 2-Qubit state)
Consider a normalized state |ψ⟩ in H = ℂ^{⊕2} ⊗ ℂ^{⊕2}, say
\[ |\psi\rangle = \alpha |00\rangle + \beta |01\rangle + \gamma |10\rangle + \delta |11\rangle. \]
We can make a measurement along the diagonal matrix T : H → H with
\[ T(|00\rangle) = 0|00\rangle, \quad T(|01\rangle) = 1|01\rangle, \quad T(|10\rangle) = 2|10\rangle, \quad T(|11\rangle) = 3|11\rangle. \]

Thus we get each of the eigenvalues 0, 1, 2, 3 with probability |α|2, |β|2, |γ|2, |δ|2. So if we like we can make “simultaneous” measurements on two qubits in the same way that we make measurements on one qubit.

However, some states behave very weirdly.

Example 23.4.5 (The singlet state)
Consider the state
\[ |\Psi^-\rangle = \frac{1}{\sqrt 2} |01\rangle - \frac{1}{\sqrt 2} |10\rangle \]
which is called the singlet state. One can see that |Ψ^−⟩ is not a simple tensor, which means that it doesn't just consist of two qubits side by side: the qubits in H_A and H_B have become entangled.

Now, what happens if we measure just the qubit A? This corresponds to making the measurement
\[ T = \sigma_z^A \otimes \operatorname{id}_B. \]
The eigenspace decomposition of T can be described as:

  • the +1-eigenspace is |0⟩_A ⊗ H_B, spanned by |00⟩ and |01⟩, and
  • the −1-eigenspace is |1⟩_A ⊗ H_B, spanned by |10⟩ and |11⟩.

So one of two things will happen:

  • With probability 1/2, we measure +1 and the collapsed state is |01⟩: qubit A is |0⟩ and qubit B is |1⟩.
  • With probability 1/2, we measure −1 and the collapsed state is |10⟩: qubit A is |1⟩ and qubit B is |0⟩.

But now we see that measurement along A has told us what the state of the bit B is completely!

By solely looking at measurements on A, we learn B; this paradox is called spooky action at a distance, or in Einstein’s tongue, spukhafte Fernwirkung. Thus,

In tensor products of Hilbert spaces, states which are not pure tensors correspond to “entangled” states.

What this really means is that the qubits cannot be described independently; the state of the system must be given as a whole. That’s what entangled states mean: the qubits somehow depend on each other.
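For two qubits there is a concrete test for this: arrange the four coefficients of |ψ⟩ = ∑ c_{ij}|ij⟩ into a 2 × 2 matrix (c_{ij}); then |ψ⟩ is a pure tensor exactly when that matrix has rank 1, since a pure tensor gives an outer product. A quick NumPy check of mine, applied to the singlet state:

    import numpy as np

    s = 1 / np.sqrt(2)
    C_singlet = np.array([[0, s], [-s, 0]])  # coefficients of (|01> - |10>)/sqrt(2)
    C_pure = np.outer([1j, 2], [1, 1])       # coefficients of any pure tensor

    print(np.linalg.matrix_rank(C_singlet))  # 2: entangled
    print(np.linalg.matrix_rank(C_pure))     # 1: not entangled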

23.5  A few harder problems to think about

Problem 23A. We measure |Ψ^−⟩ by σ_x^A ⊗ id_B, and hence obtain either +1 or −1. Determine the state of qubit B from this measurement.

Problem 23B (Greenberger-Horne-Zeilinger paradox). Consider the state in (ℂ^{⊕2})^{⊗3}
\[ |\Psi\rangle_{\mathrm{GHZ}} = \frac{1}{\sqrt 2} \left( |0\rangle_A |0\rangle_B |0\rangle_C - |1\rangle_A |1\rangle_B |1\rangle_C \right). \]
Find the value of the measurements along each of
\[ \sigma_y^A \otimes \sigma_y^B \otimes \sigma_x^C, \quad \sigma_y^A \otimes \sigma_x^B \otimes \sigma_y^C, \quad \sigma_x^A \otimes \sigma_y^B \otimes \sigma_y^C, \quad \sigma_x^A \otimes \sigma_x^B \otimes \sigma_x^C. \]

As for the paradox: what happens if you multiply all these measurements together?

24  Quantum circuits

Now that we’ve discussed qubits, we can talk about how to use them in circuits. The key change — and the reason that quantum circuits can do things that classical circuits cannot — is the fact that we are allowing linear combinations of 0 and 1.

24.1  Classical logic gates

In classical logic, we build circuits which take in some bits as input, and return some bits as output. These circuits are built out of individual logic gates. For example, the AND gate takes two bits in and outputs one bit:
(0, 0) ↦ 0, (0, 1) ↦ 0, (1, 0) ↦ 0, (1, 1) ↦ 1.

One can also represent the AND gate using the “truth table”:

A  B | A and B
0  0 |    0
0  1 |    0
1  0 |    0
1  1 |    1

Similarly, we have the OR gate and the NOT gate:

A  B | A or B
0  0 |   0
0  1 |   1
1  0 |   1
1  1 |   1

A | not A
0 |   1
1 |   0

We also have a so-called COPY gate, which duplicates a bit: 0 ↦ (0, 0) and 1 ↦ (1, 1).

Of course, the first theorem you learn about these gates is that:

Theorem 24.1.1 (AND, OR, NOT, COPY are universal)
The set of four gates AND, OR, NOT, COPY is universal in the sense that any boolean function f : {0,1}n →{0,1} can be implemented as a circuit using only these gates.

Proof. Somewhat silly: we essentially write down a circuit that OR's across all input strings in f^pre(1). For example, suppose we have n = 3 and want to simulate the function f(abc) with f(011) = f(110) = 1 and 0 otherwise. Then the corresponding Boolean expression for f is simply
\[ f(abc) = \left[ (\text{not } a) \text{ and } b \text{ and } c \right] \text{ or } \left[ a \text{ and } b \text{ and } (\text{not } c) \right]. \]

Clearly, one can do the same for any other f, and implement this logic into a circuit. □
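The construction is mechanical enough to carry out in a few lines of Python (a sketch of mine, with helper names of my own choosing):

    from itertools import product

    def to_boolean_expression(f, n):
        # OR together one AND-clause for each input string in f^pre(1).
        clauses = []
        for bits in product((0, 1), repeat=n):
            if f(*bits):
                lits = [f"x{i}" if b else f"(not x{i})" for i, b in enumerate(bits)]
                clauses.append("[" + " and ".join(lits) + "]")
        return " or ".join(clauses) if clauses else "0"

    # The f from the proof: f(011) = f(110) = 1, and 0 otherwise.
    f = lambda a, b, c: int((a, b, c) in {(0, 1, 1), (1, 1, 0)})
    print(to_boolean_expression(f, 3))
    # [(not x0) and x1 and x2] or [x0 and x1 and (not x2)]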

Remark 24.1.2 — Since x and y = not ((not x) or (not y)), it follows that in fact, we can dispense with the AND gate.

24.2  Reversible classical logic

Prototypical example for this section: CNOT gate, Toffoli gate.

For the purposes of quantum mechanics, this is not enough. To carry through the analogy we in fact need gates that are reversible, meaning the gates are bijections from the input space to the output space. In particular, such gates must take the same number of input and output bits.

Example 24.2.1 (Reversible gates)

(a)
None of the gates AND, OR, COPY are reversible for dimension reasons.
(b)
The NOT gate, however, is reversible: it is a bijection {0,1}→{0,1}.

Example 24.2.2 (The CNOT gate)
The controlled-NOT gate, or the CNOT gate, is a reversible 2-bit gate with the following truth table.

In  | Out
0 0 | 0 0
1 0 | 1 1
0 1 | 0 1
1 1 | 1 0

In other words, this gate XOR’s the first bit to the second bit, while leaving the first bit unchanged. It is depicted as follows.

x ──∙── x
y ──⊕── x + y (mod 2)

The first dot is called the "control", while the ⊕ is the "negation" operation: the first bit controls whether the second bit gets flipped or not. Thus, a typical application might be as follows.

1 ──∙── 1
0 ──⊕── 1

In fact, every reversible gate on two bits is a composition of NOT and CNOT gates, so these are essentially the only nontrivial reversible gates on at most two bits.

We now need a different definition of universal for our reversible gates.

Definition 24.2.3. A set of reversible gates can simulate a Boolean function f(x_1, …, x_n) if one can implement a circuit which takes

  • as input, the bits x_1, …, x_n, together with some auxiliary bits preset to fixed values (called ancilla bits), and
  • as output, the value f(x_1, …, x_n) in one of the output bits; the remaining output bits may be arbitrary ("garbage bits").

The gate(s) are universal if they can simulate any Boolean function.

For example, the CNOT gate can simulate the NOT gate, using a single ancilla bit 1, according to the following circuit.

x ──∙── x
1 ──⊕── not x

Unfortunately, it is not universal.

Proposition 24.2.4 (CNOT ⇏AND)
The CNOT gate cannot simulate the boolean function “x and y”.

Sketch of Proof. One can see that any function simulated using only CNOT gates must be of the form
\[ a_1 x_1 + a_2 x_2 + \dots + a_n x_n \pmod 2 \]
because CNOT is the map (x, y) ↦ (x, x + y). Thus, even with ancilla bits, we can only create functions of the form ax + by + c (mod 2) for fixed a, b, c. The AND gate is not of this form. □

So, we need at least a three-qubit gate. The most commonly used one is:

Definition 24.2.5. The three-bit Toffoli gate, also called the CCNOT gate, is given by

x ──∙── x
y ──∙── y
z ──⊕── z + xy (mod 2)

So the Toffoli gate has two controls, and toggles the last bit if and only if both of the control bits are 1.

This replacement is sufficient.

Theorem 24.2.6 (Toffoli gate is universal)
The Toffoli gate is universal.

Proof. We will show it can reversibly simulate AND, NOT, hence OR, which we know is enough to show universality. (We don’t need COPY because of reversibility.)

For the AND gate, we draw the circuit

x ──∙── x
y ──∙── y
0 ──⊕── x and y

with one ancilla bit, and no garbage bits.

For the NOT gate, we use two ancilla 1 bits and one garbage bit:

1 ──∙── 1
z ──∙── z
1 ──⊕── not z

This completes the proof. □
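Translated into code, the proof reads as follows (a Python sketch of mine, where the Toffoli gate is literally the map (x, y, z) ↦ (x, y, z + xy mod 2)):

    def toffoli(x, y, z):
        return x, y, z ^ (x & y)

    def AND(x, y):   # one ancilla 0, no garbage bits
        return toffoli(x, y, 0)[2]

    def NOT(z):      # two ancilla 1 bits, one garbage bit
        return toffoli(1, z, 1)[2]

    def OR(x, y):    # De Morgan's law, as in the remark earlier
        return NOT(AND(NOT(x), NOT(y)))

    for x in (0, 1):
        assert NOT(x) == 1 - x
        for y in (0, 1):
            assert AND(x, y) == (x & y) and OR(x, y) == (x | y)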

Hence, in theory we can create any classical circuit we desire using the Toffoli gate alone. Of course, this could require exponentially many gates for even the simplest of functions. Fortunately, this is NO BIG DEAL because I'm a math major, and having 2^n gates is a problem best left for the CS majors.

24.3  Quantum logic gates

In quantum mechanics, since we can have linear combinations of basis elements, our logic gates will instead consist of linear maps. Moreover, in quantum computation, gates are always reversible, which was why we took the time in the previous section to show that we can still simulate any function when restricted to reversible gates (e.g. using the Toffoli gate).

First, some linear algebra:

Definition 24.3.1. Let V be a finite-dimensional inner product space. Then for a map U : V → V, the following are equivalent:

  • ⟨Uv, Uw⟩ = ⟨v, w⟩ for any v, w ∈ V;
  • U^† = U^{−1}.

The map U is called unitary if it satisfies these equivalent conditions.

Then

Quantum logic gates are unitary matrices.

In particular, unlike the classical situation, quantum gates are always reversible (and hence they always take the same number of input and output bits).

For example, consider the CNOT gate. Its quantum analog should be a unitary map U_CNOT : H → H, where H = ℂ^{⊕2} ⊗ ℂ^{⊕2}, given on basis elements by
\[ U_{\mathrm{CNOT}}(|00\rangle) = |00\rangle, \quad U_{\mathrm{CNOT}}(|01\rangle) = |01\rangle, \quad U_{\mathrm{CNOT}}(|10\rangle) = |11\rangle, \quad U_{\mathrm{CNOT}}(|11\rangle) = |10\rangle. \]
So pictorially, the quantum CNOT gate is given by

|0⟩ ──∙── |0⟩    |0⟩ ──∙── |0⟩    |1⟩ ──∙── |1⟩    |1⟩ ──∙── |1⟩
|0⟩ ──⊕── |0⟩    |1⟩ ──⊕── |1⟩    |0⟩ ──⊕── |1⟩    |1⟩ ──⊕── |0⟩

OK, so what? The whole point of quantum mechanics is that we allow qubits to be in linear combinations of |0⟩ and |1⟩, too, and this will produce interesting results. For example, let's take |←⟩ = \frac{1}{\sqrt 2}(|0⟩ − |1⟩) and plug it into the top, with |1⟩ on the bottom, and see what happens:
\[ U_{\mathrm{CNOT}}\left( |{\leftarrow}\rangle \otimes |1\rangle \right) = U_{\mathrm{CNOT}}\left( \frac{1}{\sqrt 2} \left( |01\rangle - |11\rangle \right) \right) = \frac{1}{\sqrt 2} \left( |01\rangle - |10\rangle \right) = |\Psi^-\rangle \]
which is the fully entangled singlet state! Picture:

|←⟩ ──∙──
|1⟩ ──⊕──   (output: |Ψ^−⟩)

Thus, when we input mixed states into our quantum gates, the outputs are often entangled states, even when the original inputs are not entangled.
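This little computation is easy to confirm numerically (a sketch of mine; basis ordered |00⟩, |01⟩, |10⟩, |11⟩):

    import numpy as np

    U_CNOT = np.array([[1, 0, 0, 0],   # |00> -> |00>
                       [0, 1, 0, 0],   # |01> -> |01>
                       [0, 0, 0, 1],   # output |10> comes from input |11>
                       [0, 0, 1, 0]])  # output |11> comes from input |10>

    left = np.array([1, -1]) / np.sqrt(2)   # |left> = (|0> - |1>)/sqrt(2)
    one = np.array([0, 1])                  # |1>
    print(U_CNOT @ np.kron(left, one))      # (0, 1/sqrt2, -1/sqrt2, 0) = |Psi^->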

Example 24.3.2 (More examples of quantum gates)

(a) Every reversible classical gate that we encountered before has a quantum analog obtained in the same way as CNOT: by specifying the values on basis elements. For example, there is a quantum Toffoli gate which, for example, sends

|1⟩ ──∙── |1⟩
|1⟩ ──∙── |1⟩
|0⟩ ──⊕── |1⟩

(b) The Hadamard gate on one qubit is a rotation given by
\[ \begin{bmatrix} \frac{1}{\sqrt 2} & \frac{1}{\sqrt 2} \\ \frac{1}{\sqrt 2} & -\frac{1}{\sqrt 2} \end{bmatrix}. \]
Thus, it sends |0⟩ to |→⟩ and |1⟩ to |←⟩. Note that the Hadamard gate is its own inverse. It is depicted by an "H" box.

|0⟩ ──[H]── |→⟩

(c) More generally, if U is a 2 × 2 unitary matrix (i.e. a map ℂ^{⊕2} → ℂ^{⊕2}) then there is a U-rotation gate similar to the previous one, which applies U to the input.

|ψ⟩ ──[U]── U|ψ⟩

For example, the classical NOT gate is represented by U = σ_x.

(d) A controlled U-rotation gate generalizes the CNOT gate. Let U : ℂ^{⊕2} → ℂ^{⊕2} be a rotation gate, and let H = ℂ^{⊕2} ⊗ ℂ^{⊕2} be a 2-qubit space. Then the controlled U gate has the following circuit diagrams.

|0⟩ ──∙──── |0⟩        |1⟩ ──∙──── |1⟩
|ψ⟩ ──[U]── |ψ⟩        |ψ⟩ ──[U]── U|ψ⟩

Thus, U is applied when the controlling bit is 1, and CNOT is the special case U = σ_x. As before, we get interesting behavior if the control is mixed.

And now, some more counterintuitive quantum behavior. Suppose we try to use CNOT as a copy, with truth table

In  | Out
0 0 | 0 0
1 0 | 1 1
0 1 | 0 1
1 1 | 1 0

The point of this gate is to be used with a garbage 0 at the bottom to try and simulate a "copy" operation. So indeed, one can check that

|0⟩ ──∙── |0⟩        |1⟩ ──∙── |1⟩
|0⟩ ──⊕── |0⟩        |0⟩ ──⊕── |1⟩

Thus we can copy |0⟩ and |1⟩. But as we've already seen, if we input |←⟩ ⊗ |0⟩ into U_CNOT, we end up with the entangled state \frac{1}{\sqrt 2}(|00⟩ − |11⟩), which is decisively not the |←⟩ ⊗ |←⟩ we wanted. And in fact, the so-called no-cloning theorem implies that it's impossible to duplicate an arbitrary |ψ⟩; the best we can do is copy specific orthogonal states as in the classical case. See also ??.

24.4  Deutsch-Jozsa algorithm

The Deutsch-Jozsa algorithm is the first example of a nontrivial quantum algorithm which cannot be performed classically: it is a “proof of concept” that would later inspire Grover’s search algorithm and Shor’s factoring algorithm.

The problem is as follows: we're given a function f : {0,1}^n → {0,1}, and promised that the function f is either

  • constant, meaning f takes the same value on all inputs, or
  • balanced, meaning f(x) = 0 for exactly half of the inputs x.

The function f is given in the form of a reversible black box U_f which is the control of a NOT gate, so it can be represented as the circuit diagram

|x_1 x_2 … x_n⟩ ─/ⁿ─[U_f]── |x_1 x_2 … x_n⟩
           |y⟩ ─────[U_f]── |y + f(x) mod 2⟩

i.e. if f(x_1, …, x_n) = 0 then the gate does nothing, otherwise the gate flips the y bit at the bottom. The slash with the n indicates that the top of the input really consists of n qubits, not just the one qubit drawn, and so the black box U_f is a map on n + 1 qubits.

The problem is to determine, with as few calls to the black box Uf as possible, whether f is balanced or constant.

Question 24.4.1. Classically, show that in the worst case we may need up to 2^{n−1} + 1 calls to the function f to answer the question.

So with only classical tools, it would take O(2^n) queries to determine whether f is balanced or constant. However,

Theorem 24.4.2 (Deutsch-Jozsa)
The Deutsch-Jozsa problem can be determined in a quantum circuit with only a single call to the black box.

Proof. For concreteness, we do the case n = 1 explicitly; the general case is contained in ??. We claim that the necessary circuit is

|0⟩ ──[H]──[U_f]──[H]──(measure)
|1⟩ ──[H]──[U_f]──────

Here the H's are Hadamard gates, and the meter at the end of the top wire indicates that we make a measurement in the usual |0⟩, |1⟩ basis. This is not a typo! Even though classically the top wire is just a repeat of the input information, we are about to see that it's the top qubit we want to measure.

Note that after the two Hadamard operations, the state we get is
\[ |01\rangle \overset{H \otimes H}{\longmapsto} \left( \frac{1}{\sqrt 2} (|0\rangle + |1\rangle) \right) \otimes \left( \frac{1}{\sqrt 2} (|0\rangle - |1\rangle) \right) = \frac 12 \Big( |0\rangle \otimes (|0\rangle - |1\rangle) + |1\rangle \otimes (|0\rangle - |1\rangle) \Big). \]

So after applying U_f, we obtain
\[ \frac 12 \Big( |0\rangle \otimes \left( |0 + f(0)\rangle - |1 + f(0)\rangle \right) + |1\rangle \otimes \left( |0 + f(1)\rangle - |1 + f(1)\rangle \right) \Big) \]
where the modulo 2 has been left implicit. Now, observe that the effect of going from |0⟩ − |1⟩ to |0 + f(x)⟩ − |1 + f(x)⟩ is merely to either keep the state the same (if f(x) = 0) or to negate it (if f(x) = 1). So we can simplify and factor to get
\[ \frac 12 \left( (-1)^{f(0)} |0\rangle + (-1)^{f(1)} |1\rangle \right) \otimes \left( |0\rangle - |1\rangle \right). \]

Thus, the picture so far is:

|0⟩ ──[H]──[U_f]── \frac{1}{\sqrt 2} \left( (-1)^{f(0)}|0⟩ + (-1)^{f(1)}|1⟩ \right)
|1⟩ ──[H]──[U_f]── \frac{1}{\sqrt 2} \left( |0⟩ - |1⟩ \right)

In particular, the resulting state is not entangled, and we can simply discard the last qubit (!). Now observe:

  • If f is constant, then the top qubit is ±\frac{1}{\sqrt 2}(|0⟩ + |1⟩) = ±|→⟩.
  • If f is balanced, then the top qubit is ±\frac{1}{\sqrt 2}(|0⟩ − |1⟩) = ±|←⟩.

So simply doing a measurement along σ_x will give us the answer. Equivalently, perform another H gate (so that H|→⟩ = |0⟩, H|←⟩ = |1⟩) and measure along σ_z in the usual |0⟩, |1⟩ basis. Thus for n = 1 we only need a single call to the oracle. □
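The n = 1 case is small enough to simulate outright; the sketch below (mine) runs the circuit for all four functions f : {0,1} → {0,1} and reads off the top qubit:

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    I2 = np.eye(2)

    def U_f(f):  # |x>|y> -> |x>|y + f(x) mod 2>, basis |00>, |01>, |10>, |11>
        U = np.zeros((4, 4))
        for x in (0, 1):
            for y in (0, 1):
                U[2 * x + (y ^ f(x)), 2 * x + y] = 1
        return U

    for name, f in [("constant 0", lambda x: 0), ("constant 1", lambda x: 1),
                    ("balanced x", lambda x: x), ("balanced 1-x", lambda x: 1 - x)]:
        state = np.kron([1, 0], [0, 1])           # |0> (x) |1>
        state = U_f(f) @ (np.kron(H, H) @ state)  # Hadamards, then the black box
        state = np.kron(H, I2) @ state            # final H on the top qubit only
        p0 = state[0] ** 2 + state[1] ** 2        # P(top qubit measures |0>)
        print(name, "-> constant" if np.isclose(p0, 1) else "-> balanced")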

24.5  A few harder problems to think about

Problem 24A (Fredkin gate). The Fredkin gate (also called the controlled swap, or CSWAP gate) is the three-bit gate with the following truth table:

In    | Out
0 0 0 | 0 0 0
0 0 1 | 0 0 1
0 1 0 | 0 1 0
0 1 1 | 0 1 1
1 0 0 | 1 0 0
1 0 1 | 1 1 0
1 1 0 | 1 0 1
1 1 1 | 1 1 1

Thus the gate swaps the last two input bits whenever the first bit is 1. Show that this gate is also reversible and universal.

Problem 24B (Baby no-cloning theorem). Show that there is no unitary map U on two qubits which sends U(|ψ⟩ ⊗ |0⟩) = |ψ⟩ ⊗ |ψ⟩ for every qubit |ψ⟩, i.e. the following circuit diagram is impossible.

|ψ⟩ ──[U]── |ψ⟩
|0⟩ ──[U]── |ψ⟩

Problem 24C (Deutsch-Jozsa). Given the black box U_f described in the Deutsch-Jozsa algorithm, consider the following circuit.

|0…0⟩ ─/ⁿ─[H^{⊗n}]──[U_f]──[H^{⊗n}]──(measure)
  |1⟩ ─────[H]──────[U_f]───────────
That is, take n copies of |0⟩, apply the Hadamard rotation to all of them, apply Uf, reverse the Hadamard to all n input bits (again discarding the last bit), then measure all n bits in the |0⟩/|1⟩ basis (as in ?? ).

Show that the probability of measuring |0...0⟩ is 1 if f is constant and 0 if f is balanced.

Problem 24D (Barenco et al, 1995; arXiv:quant-ph/9503016v1). Let
\[ P = \begin{bmatrix} 1 & 0 \\ 0 & i \end{bmatrix} \qquad Q = \frac{1}{\sqrt 2} \begin{bmatrix} 1 & -i \\ -i & 1 \end{bmatrix}. \]
Verify that the quantum Toffoli gate can be implemented using just controlled rotations via the circuit

|x_1⟩ ────────∙─────∙────∙──────────∙──
|x_2⟩ ──∙─────────[P]───⊕─────∙─────⊕──
|x_3⟩ ─[Q]───[Q]─────────────[Q†]──────

This was a big surprise to researchers when discovered, because classical reversible logic requires three-bit gates (e.g. Toffoli, Fredkin).

25  Shor’s algorithm

OK, now for Shor’s Algorithm: how to factor M = pq in O(        )
 (logM )2 time.

25.1  The classical (inverse) Fourier transform

The “crux move” in Shor’s algorithm is the so-called quantum Fourier transform. The Fourier transform is used to extract periodicity in data, and it turns out the quantum analogue is a lot faster than the classical one.

Let me throw the definition at you first. Let N be a positive integer, and let ω_N = exp\left( \frac{2\pi i}{N} \right).

Definition 25.1.1. Given a tuple of complex numbers
\[ (x_0, x_1, \dots, x_{N-1}) \]
its discrete inverse Fourier transform is the sequence (y_0, y_1, …, y_{N−1}) defined by
\[ y_k = \frac 1N \sum_{j=0}^{N-1} \omega_N^{jk} x_j. \]

Equivalently, one is applying the matrix
\[ \frac 1N \begin{bmatrix} 1 & 1 & 1 & \dots & 1 \\ 1 & \omega_N & \omega_N^2 & \dots & \omega_N^{N-1} \\ 1 & \omega_N^2 & \omega_N^4 & \dots & \omega_N^{2(N-1)} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & \omega_N^{N-1} & \omega_N^{2(N-1)} & \dots & \omega_N^{(N-1)^2} \end{bmatrix} \begin{bmatrix} x_0 \\ x_1 \\ \vdots \\ x_{N-1} \end{bmatrix} = \begin{bmatrix} y_0 \\ y_1 \\ \vdots \\ y_{N-1} \end{bmatrix}. \]

The reason this operation is important is because it lets us detect if the xi are periodic:

Example 25.1.2 (Example of discrete inverse Fourier transform)
Let N = 6, ω = ω_6 = exp(2πi/6), and suppose (x_0, x_1, x_2, x_3, x_4, x_5) = (0, 1, 0, 1, 0, 1) (hence x_j is periodic modulo 2). Thus,
\begin{align*} y_0 &= \tfrac 16 \left( \omega^0 + \omega^0 + \omega^0 \right) = \tfrac 12 \\ y_1 &= \tfrac 16 \left( \omega^1 + \omega^3 + \omega^5 \right) = 0 \\ y_2 &= \tfrac 16 \left( \omega^2 + \omega^6 + \omega^{10} \right) = 0 \\ y_3 &= \tfrac 16 \left( \omega^3 + \omega^9 + \omega^{15} \right) = -\tfrac 12 \\ y_4 &= \tfrac 16 \left( \omega^4 + \omega^{12} + \omega^{20} \right) = 0 \\ y_5 &= \tfrac 16 \left( \omega^5 + \omega^{15} + \omega^{25} \right) = 0. \end{align*}
Thus, in the inverse transformation the "amplitudes" are all concentrated at multiples of 3; this reveals that the original sequence is periodic with period \frac N3 = 2.

More generally, given a sequence of 1's appearing with period r, the amplitudes will peak at inputs which are divisible by \frac{N}{\gcd(N, r)}.

Remark 25.1.3 — The fact that this operation is called the “inverse” Fourier transform is mostly a historical accident (as my understanding goes). Confusingly, the corresponding quantum operation is the (not-inverted) Fourier transform.

If we apply the definition as written, computing the transform takes O(N^2) time. It turns out that by an algorithm called the fast Fourier transform (whose details we won't discuss), one can reduce this to O(N log N) time. However, for Shor's algorithm this is still insufficient; we need something like O((log N)^2) instead. This is where the quantum Fourier transform comes in.

25.2  The quantum Fourier transform

Note that to compute a Fourier transform, we need to multiply an N × N matrix with an N-vector, so this takes O(N^2) multiplications. However, we are about to show that with a quantum computer, one can do this using O((log N)^2) quantum gates when N = 2^n, on a system with n qubits.

First, some more notation:

Abuse of Notation 25.2.1. In what follows, |x⟩ will refer to |x_n⟩ ⊗ |x_{n−1}⟩ ⊗ ⋯ ⊗ |x_1⟩, where x = x_n x_{n−1} ⋯ x_1 in binary. For example, if n = 3 then |6⟩ really means |1⟩ ⊗ |1⟩ ⊗ |0⟩.

Observe that the n-qubit space now has an orthonormal basis |0⟩, |1⟩, …, |N − 1⟩.

Definition 25.2.2. Consider an n-qubit state
\[ |\psi\rangle = \sum_{k=0}^{N-1} x_k |k\rangle. \]
The quantum Fourier transform is defined by
\[ U_{\mathrm{QFT}}(|\psi\rangle) = \frac{1}{\sqrt N} \sum_{j=0}^{N-1} \left( \sum_{k=0}^{N-1} \omega_N^{jk} x_k \right) |j\rangle. \]
In other words, using the basis |0⟩, …, |N − 1⟩, U_QFT is given by the matrix
\[ U_{\mathrm{QFT}} = \frac{1}{\sqrt N} \begin{bmatrix} 1 & 1 & 1 & \dots & 1 \\ 1 & \omega_N & \omega_N^2 & \dots & \omega_N^{N-1} \\ 1 & \omega_N^2 & \omega_N^4 & \dots & \omega_N^{2(N-1)} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & \omega_N^{N-1} & \omega_N^{2(N-1)} & \dots & \omega_N^{(N-1)^2} \end{bmatrix}. \]

This is exactly the same definition as before, except the normalization is now \frac{1}{\sqrt N} rather than \frac 1N, so that U_QFT is unitary. But the trick is that in the quantum setup, the matrix can be rewritten:

Proposition 25.2.3 (Tensor representation)
Let |x⟩ = |x_n x_{n−1} … x_1⟩. Then
\[ U_{\mathrm{QFT}}(|x_n x_{n-1} \dots x_1\rangle) = \frac{1}{\sqrt N} \left( |0\rangle + e^{2\pi i \cdot 0.x_1} |1\rangle \right) \otimes \left( |0\rangle + e^{2\pi i \cdot 0.x_2 x_1} |1\rangle \right) \otimes \dots \otimes \left( |0\rangle + e^{2\pi i \cdot 0.x_n \dots x_1} |1\rangle \right) \]
where 0.x_k ⋯ x_1 denotes the binary fraction \frac{x_k}{2} + \frac{x_{k-1}}{4} + \dots + \frac{x_1}{2^k}.

Proof. Direct (and quite annoying) computation. In short, expand everything. □

So by using mixed states, we can deal with the quantum Fourier transform using this “multiplication by tensor product” trick that isn’t possible classically.
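If you distrust the "annoying computation", the proposition can be checked numerically by comparing the matrix definition against the tensor-product formula on every basis state (a NumPy sketch of mine, for n = 3):

    import numpy as np

    n = 3
    N = 2 ** n
    w = np.exp(2j * np.pi / N)
    U = np.array([[w ** (j * k) for k in range(N)] for j in range(N)]) / np.sqrt(N)

    for x in range(N):
        bits = [(x >> i) & 1 for i in range(n)]  # bits[i-1] = x_i, x_1 the low bit
        out = np.array([1 + 0j])
        for k in range(1, n + 1):
            frac = sum(bits[k - m] / 2 ** m for m in range(1, k + 1))  # 0.x_k...x_1
            out = np.kron(out, np.array([1, np.exp(2j * np.pi * frac)]))
        assert np.allclose(U[:, x], out / np.sqrt(N))
    print("tensor formula agrees with the matrix on all", N, "basis states")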

Now, without further ado, here's the circuit. Define the rotation matrices
\[ R_k = \begin{bmatrix} 1 & 0 \\ 0 & \exp(2\pi i / 2^k) \end{bmatrix}. \]
Then, for n = 3 the circuit is given by using controlled R_k's as follows:

|x_3⟩ ──[H]──[R_2]──[R_3]─────────────────── |y_1⟩
|x_2⟩ ─────────∙─────────────[H]──[R_2]───── |y_2⟩
|x_1⟩ ────────────────∙─────────────∙──[H]── |y_3⟩

Exercise 25.2.4. Show that in this circuit, the image of |x_3 x_2 x_1⟩ (for binary x_i) is
\[ \frac{1}{\sqrt N} \left( |0\rangle + e^{2\pi i \cdot 0.x_1} |1\rangle \right) \otimes \left( |0\rangle + e^{2\pi i \cdot 0.x_2 x_1} |1\rangle \right) \otimes \left( |0\rangle + e^{2\pi i \cdot 0.x_3 x_2 x_1} |1\rangle \right) \]
as claimed.

For general n, we can write the circuit inductively: first apply a QFT_{n−1} circuit to the qubits x_n, …, x_2; then apply controlled rotations R_n, R_{n−1}, …, R_2 to the wires carrying x_n, …, x_2 respectively, each controlled by x_1; and finally apply H to x_1.

|x_n⟩     ┤          ├──[R_n]─────────────────────── |y_1⟩
|x_{n−1}⟩ ┤          ├─────────[R_{n−1}]──────────── |y_2⟩
   ⋮      ┤ QFT_{n−1} ├            ⋱
|x_2⟩     ┤          ├────────────────────[R_2]───── |y_{n−1}⟩
|x_1⟩     ──────────────∙─────────∙─────────∙──[H]── |y_n⟩

Question 25.2.5. Convince yourself that when n = 3 the two circuits displayed are equivalent.

Thus, the quantum Fourier transform is achievable with O(n^2) gates, which is enormously better than the O(N log N) operations achieved by the classical fast Fourier transform (where N = 2^n).

25.3  Shor’s algorithm

The quantum Fourier transform is the key piece of Shor’s algorithm. Now that we have it, we can solve the factoring problem.

Let p,q > 3 be odd primes, and assume pq. The main idea is to turn factoring an integer M = pq into a problem about finding the order of x (mod M); the latter is a “periodicity” problem that the quantum Fourier transform will let us solve. Specifically, say that an x (mod M) is good if

(i)
gcd(x, M) = 1,
(ii)
the order r of x (mod M) is even, and
(iii)
factoring 0 ≡ x^r − 1 = (x^{r/2} − 1)(x^{r/2} + 1) (mod M), neither of the two factors is 0 (mod M). Thus one of them is divisible by p, and the other is divisible by q.

Exercise 25.3.1 (For contest number theory practice). Show that for M = pq at least half of the residues in (ℤ/M)^× are good.

So if we can find the order of an arbitrary x ∈ (ℤ/M)^×, then we just keep picking x until we pick a good one (this happens more than half the time); once we do, we compute gcd(x^{r/2} − 1, M) using the Euclidean algorithm to extract one of the prime factors of M, and we’re home free.
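To make the classical half of this reduction concrete, here is a Python sketch. The order-finding routine below is brute force (exponential time) — that is exactly the step Shor’s algorithm replaces with a quantum circuit:

    from math import gcd
    from random import randrange

    def order(x, M):
        """Multiplicative order of x mod M, by brute force."""
        r, y = 1, x % M
        while y != 1:
            y = y * x % M
            r += 1
        return r

    def factor(M):
        while True:
            x = randrange(2, M)
            if gcd(x, M) > 1:          # extremely lucky: x already shares a factor with M
                return gcd(x, M)
            r = order(x, M)
            # x is "good": r is even and x^(r/2) is not -1 mod M, so both factors
            # of (x^(r/2) - 1)(x^(r/2) + 1) = 0 (mod M) are nonzero mod M.
            if r % 2 == 0 and pow(x, r // 2, M) != M - 1:
                return gcd(pow(x, r // 2, M) - 1, M)

    p = factor(77)
    print(p, 77 // p)   # 7 and 11, in some order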

Now how do we do this? The idea is not so difficult: first we generate a sequence which is periodic modulo r.

Example 25.3.2 (Factoring 77: generating the periodic state)
Let’s say we’re trying to factor M = 77, and we randomly select x = 2, and want to find its order r. Let n = 13 and N = 2^13, and start by initializing the state

|ψ⟩ = (1/√N) ∑_{k=0}^{N−1} |k⟩.

Now, build a circuit U_x (depending on x = 2!) which takes |k⟩|0⟩ to |k⟩|2^k mod M⟩. Applying this to |ψ⟩|0⟩ gives

U(|ψ⟩|0⟩) = (1/√N) ∑_{k=0}^{N−1} |k⟩ ⊗ |2^k mod M⟩.

Now suppose we measure the second qubit, and get a state of |2^7 mod 77⟩ = |51⟩. That tells us that the collapsed state now, up to scaling, is

(|7⟩ + |7 + r⟩ + |7 + 2r⟩ + ⋯) ⊗ |51⟩.

The bottleneck is actually the circuit Ux; one can compute xk (mod M) by using repeated squaring, but it’s still the clumsy part of the whole operation.

In general, the operation is: prepare the uniform superposition |ψ⟩, apply U_x to |ψ⟩|0⟩, and measure the second register, collapsing the first register to a state which is periodic with period r.

Suppose we apply the quantum Fourier transform to the left qubit |ϕ⟩ now: since the left register is periodic modulo r, we expect the transform will tell us what r is. Unfortunately, this doesn’t quite work out, since N is a power of two, but we don’t expect r to be.

Nevertheless, consider a state

|ϕ⟩ = |k_0⟩ + |k_0 + r⟩ + ⋯

so for example previously we had k_0 = 7 when we measured |51⟩ for x = 2. Applying the quantum Fourier transform, we see that the coefficient of |j⟩ in the transformed image is equal to

ω_N^{k_0 j} · (ω_N^0 + ω_N^{jr} + ω_N^{2jr} + ω_N^{3jr} + ⋯)

As this is a sum of roots of unity, we realize we have destructive interference unless ω_N^{jr} = 1 (since N is large). In other words, we approximately have

U_QFT(|ϕ⟩) ≈ ∑_{0 ≤ j < N, jr/N ∈ ℤ} |j⟩

up to scaling as usual. The bottom line is that

If we measure U_QFT|ϕ⟩ we obtain a |j⟩ such that jr/N is close to an integer s ∈ ℤ.

And thus given sufficient luck we can use continued fractions to extract the value of r.

Example 25.3.3 (Finishing the factoring of M = 77)
As before, we made an observation of the second qubit, and thus the first qubit collapses to the state |ϕ⟩ = |7⟩ + |7 + r⟩ + ⋯. Now we make a measurement and obtain j = 4642, which means that for some integer s we have

(4642/2^13) · r ≈ s.

Now, we analyze the continued fraction of 4642/2^13; we find the first few convergents are

0, 1, 1/2, 4/7, 13/23, 17/30, 1152/2033, …

So 17/30 is a very good approximation, hence we deduce s = 17 and r = 30 as candidates. And indeed, one can check that r = 30 is the desired order.

This won’t work all the time (for example, we could get unlucky and measure j = 0, i.e. s = 0, which would tell us no information at all).

But one can show that we succeed any time that

gcd(s,r) = 1.

This happens at least 1/log r of the time, and since r < M this means that given sufficiently many trials, we will eventually extract the correct order r. This is Shor’s algorithm.
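The classical post-processing in the example above is easy to carry out in Python, where Fraction.limit_denominator performs exactly this continued-fraction search (a sketch, assuming Python 3 with the standard library):

    from fractions import Fraction
    from math import gcd

    M, x, N, j = 77, 2, 2 ** 13, 4642
    # Best rational approximation to j/N with denominator below M:
    approx = Fraction(j, N).limit_denominator(M - 1)
    s, r = approx.numerator, approx.denominator
    print(s, r)                        # 17 30, so we guess the order r = 30
    assert pow(x, r, M) == 1           # confirm r really is a period of x mod M
    p = gcd(pow(x, r // 2, M) - 1, M)
    print(p, M // p)                   # 7 11: the factorization of 77 recovered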

Part VIII
Calculus 101

26  Limits and series

Now that we have developed the theory of metric (and topological) spaces well, we give a three-chapter sequence which briskly covers the theory of single-variable calculus.

Much of the work has secretly already been done. For example, if (x_n) and (y_n) are real sequences with lim_{n→∞} x_n = x and lim_{n→∞} y_n = y, then in fact lim_{n→∞}(x_n + y_n) = x + y and lim_{n→∞}(x_n y_n) = xy, because we showed in ??  that arithmetic was continuous. We will also see that completeness plays a crucial role.

26.1  Completeness and inf/sup

Prototypical example for this section: sup[0,1] = sup(0,1) = 1.

As ℝ is a metric space, we may discuss continuity and convergence. There are two important facts about ℝ which will make most of the following sections tick.

The first fact you have already seen before:

Theorem 26.1.1 (ℝ is complete)
As a metric space, ℝ is complete: sequences converge if and only if they are Cauchy.

The second one we have not seen before — it is the existence of inf and sup. Your intuition should be:

sup is max adjusted slightly for infinite sets. (And inf is adjusted min.)

Why the “adjustment”?

Example 26.1.2 (Why is max not good enough?)
Let’s say we have the open interval S = (0,1). The elements can get arbitrarily close to 1, so we would like to think “1 is the max of S”; except the issue is that 1 ∉ S. In general, infinite sets don’t necessarily have a maximum, and we have to talk about bounds instead.

So we will define supS in such a way that supS = 1. The definition is that “1 is the smallest number which is at least every element of S”.

To write it out:

Definition 26.1.3. If S is a set of real numbers:

• We say x is an upper bound for S if x ≥ s for every s ∈ S; if some upper bound exists, we say S is bounded above. Lower bounds and “bounded below” are defined similarly.
• The supremum of S, denoted sup S, is the least upper bound of S; the infimum inf S is the greatest lower bound.

Theorem 26.1.4 (ℝ has infs and sups)
Let S be a nonempty set of real numbers.

• If S is bounded above, then sup S exists (i.e. it is a real number).
• If S is bounded below, then inf S exists.

Definition 26.1.5. For convenience, if S is not bounded above, we write sup S = +∞. Similarly, if S is not bounded below, we write inf S = −∞.

Example 26.1.6 (Supremums)
Since the examples for infimums are basically the same, we stick with supremums for now.

(a)
If S = {1, 2, 3, …} then S is not bounded above, so we have sup S = +∞.
(b)
If S = {…, −2, −1} denotes the set of negative integers, then sup S = −1.
(c)
Let S = [0,1] be a closed interval. Then supS = 1.
(d)
Let S = (0,1) be an open interval. Then supS = 1 as well, even though 1 itself is not an element of S.
(e)
Let S = (0,1) ∩ ℚ denote the set of rational numbers strictly between 0 and 1. Then sup S = 1 still.
(f)
If S is a finite nonempty set, then supS = maxS.

Definition 26.1.7 (Porting definitions to sequences). If a1, a2, … is a sequence we will often write

sup_n a_n := sup {a_n | n ∈ ℕ}
inf_n a_n := inf {a_n | n ∈ ℕ}

for the supremum and infimum of the set of elements of the sequence. We also use the words “bounded above/below” for sequences in the same way.

Example 26.1.8 (Infimum of a sequence)
The sequence a_n = 1/n has infimum inf_n a_n = 0.

26.2  Proofs of the two key completeness properties of ℝ

Careful readers will note that we have not actually proven either ??  or ?? . We will do so here.

First, we show that the ability to take infimums and supremums lets you prove completeness of ℝ.

Proof that ??  implies ?? . Let a1, a2, … be a Cauchy sequence. By discarding finitely many leading terms, we may as well assume that |a_i − a_j| ≤ 100 for all i and j. In particular, the sequence is now bounded; it lies in the interval [a1 − 100, a1 + 100] for example.

We want to show this sequence converges, so we have to first describe what the limit is. We know that to do this we are really going to have to use the fact that we live in ℝ. (For example, we know that in ℚ the limit of 1, 1.4, 1.41, 1.414, … is nonexistent.)

We propose the following: let

S =  {x ∈ ℝ | an ≥ x for infinitely many n }.

We claim that the sequence converges to M = supS.

Exercise 26.2.1. Show that this supremum makes sense by proving that a1 − 100 ∈ S (so S is nonempty) while all elements of S are at most a1 + 100 (so S is bounded above). Thus we are allowed to actually take the supremum.

You can think of this set S with the following picture. We have a Cauchy sequence drawn in the real line which we think converges, which we can visualize as a bunch of dots on the real line, with some order on them. We wish to cut the line with a knife such that only finitely many dots lie to the left of the knife. (For example, placing the knife all the way to the left always works.) The set S represents the places where we could put the knife, and M is “as far right” as we could go. Because of the way supremums work, M might not itself be a valid knife location, but certainly anything to its left is.

Let 𝜀 > 0 be given; we want to show eventually all terms are within 𝜀 of M. Because the sequence is Cauchy, there is an N such that |a_m − a_n| < 𝜀/2 for m ≥ n ≥ N.

Now suppose we fix n and vary m. By the definition of M, it should be possible to pick the index m such that a_m ≥ M − 𝜀/2 (there are infinitely many to choose from, since M − 𝜀/2 is a valid knife location, and we only need m ≥ n). In that case we have

|a_n − M| ≤ |a_n − a_m| + |a_m − M| < 𝜀/2 + 𝜀/2 = 𝜀

by the triangle inequality. This completes the proof. □

Therefore it is enough to prove the latter ?? . To do this though, we would need to actually give a rigorous definition of the real numbers ℝ, since we have not done so yet!

One approach that makes this easy is to use the so-called Dedekind cut construction. Suppose we take the rational numbers ℚ. Then one defines a real number to be a “cut” A ∪ B of the set of rational numbers: a pair of subsets A and B of ℚ such that

• A and B are nonempty, and every rational number belongs to exactly one of them;
• every element of A is less than every element of B;
• A has no maximal element, i.e. sup A ∉ A.

This can again be visualized by taking what you think of as the real line, and slicing at some real number. The subset ℚ gets cut into two halves A and B. If the knife happens to land exactly at a rational number, by convention we consider that number to be in the right half (which explains the last condition that sup A ∉ A).

With this definition ??  is easy: to take the supremum of a set of real numbers, we take the union of all the left halves. The hard part is then figuring out how to define +, −, ×, ÷ and so on with this rather awkward construction. If you want to read more about this construction in detail, my favorite reference is [?], in which all of this is done carefully in Chapter 1.

26.3  Monotonic sequences

Here is a great exercise.

Exercise 26.3.1 (Mandatory). Prove that if a1 ≥ a2 ≥ ⋯ ≥ 0 then the limit

lim_{n→∞} a_n

exists. Hint: the idea in the proof of the previous section helps; you can also try to use completeness of ℝ. Second hint: if you are really stuck, wait until after ?? , at which point you can essentially copy its proof.

The proof here readily adapts by shifting.

Definition 26.3.2. A sequence a_n is monotonic if either a1 ≤ a2 ≤ ⋯ or a1 ≥ a2 ≥ ⋯.

Theorem 26.3.3 (Monotonic bounded sequences converge)
Let a1, a2, …be a monotonic bounded sequence. Then limn→∞an exists.

Example 26.3.4 (Silly example of monotonicity)
Consider the sequence defined by

a1 = 1.2
a2 = 1.24
a3 = 1.248
a4 = 1.24816
a5 = 1.2481632
⋮

and so on, where in general we tack on the decimal representation of the next power of 2. This will converge to some real number, although of course this number is quite unnatural and there is probably no good description for it.
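(For the skeptical, here is a two-line Python sketch generating the truncations; each one extends the previous, so the sequence really is monotonic, and it is bounded above by 1.25:)

    s = "1."
    for k in range(1, 8):
        s += str(2 ** k)
        print(s)      # 1.2, 1.24, 1.248, 1.24816, 1.2481632, ...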

In general, “infinite decimals” can now be defined as the limit of the truncated finite ones.

Example 26.3.5 (0.9999⋅⋅⋅ = 1)
In particular, I can finally make precise the notion you argued about in elementary school that

0.9999⋅⋅⋅ = 1.

We simply define a repeating decimal to be the limit of the sequence 0.9, 0.99, 0.999, …. And it is obvious that the limit of this sequence is 1.

Some of you might be a little surprised, since it seems like we really should have 0.9999… = 9·10^{−1} + 9·10^{−2} + ⋯ — the limit of “partial sums”. Don’t worry, we’re about to define those in just a moment.

Here is one other great use of monotonic sequences.

Definition 26.3.6. Let a1, a2, … be a sequence (not necessarily monotonic) which is bounded above. We define

limsup_{n→∞} a_n := lim_{N→∞} sup_{n≥N} a_n = lim_{N→∞} sup {a_N, a_{N+1}, …}.

This is called the limit supremum of (a_n). We set limsup_{n→∞} a_n to be +∞ if a_n is not bounded above.

If a_n is bounded below, the limit infimum liminf_{n→∞} a_n is defined similarly. In particular, liminf_{n→∞} a_n = −∞ if a_n is not bounded below.

Exercise 26.3.7. Show that these definitions make sense, by checking that the supremums are non-increasing, and bounded below.

We can think of limsupnan as “supremum, but allowing finitely many terms to be discarded”.
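As a quick numeric illustration (a sketch, with max on a long finite truncation standing in for the sup of the infinite tail), take a_n = (−1)^n + 1/n: the tail suprema sup_{n≥N} a_n decrease toward limsup_{n→∞} a_n = 1.

    a = [(-1) ** n + 1 / n for n in range(1, 5001)]
    for N in (1, 10, 100, 1000):
        print(N, max(a[N - 1:]))   # 1.5, 1.1, 1.01, 1.001: decreasing toward 1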

26.4  Infinite series

Prototypical example for this section: ∑_{k≥1} 1/(k(k+1)) = lim_{n→∞} (1 − 1/(n+1)) = 1.

We will actually begin by working with infinite series, since in the previous chapters we defined limits of sequences, and so this is actually the next closest thing to work with.

This will give you a rigorous way to think about statements like

∑_{n=1}^∞ 1/n² = π²/6

and help answer questions like “how can you add rational numbers and get an irrational one?”.

Definition 26.4.1. Consider a sequence a1, a2, … of real numbers. The series ∑_k a_k converges to a limit L if the sequence of “partial sums”

s1 = a1
s2 = a1 + a2
s3 = a1 + a2 + a3
...
sn = a1 + ⋅⋅⋅ + an

converges to the limit L. Otherwise it diverges.

Abuse of Notation 26.4.2 (Writing divergence as +∞). It is customary, if all the a_k are nonnegative, to write ∑_k a_k = ∞ to denote that the series diverges.

You will notice that by using the definition of sequences, we have masterfully sidestepped the issue of “adding infinitely many numbers” which would otherwise cause all sorts of problems.

An “infinite sum” is actually the limit of its partial sums. There is no infinite addition involved.

That’s why it’s for example okay to have ∑_{n≥1} 1/n² = π²/6 be irrational; we have already seen many times that sequences of rational numbers can converge to irrational numbers. It also means we can gladly ignore all the irritating posts by middle schoolers about 1 + 2 + 3 + ⋯ = −1/12; the partial sums explode to +∞, end of story, and if you want to assign a value to that sum it had better be a definition.

Example 26.4.3 (The classical telescoping series)
We can now prove the classic telescoping series

∑_{k=1}^∞ 1/(k(k+1)) = 1

in a way that doesn’t just hand-wave the ending. Note that the nth partial sum is

∑_{k=1}^n 1/(k(k+1)) = 1/(1·2) + 1/(2·3) + ⋯ + 1/(n(n+1))
= (1/1 − 1/2) + (1/2 − 1/3) + ⋯ + (1/n − 1/(n+1))
= 1 − 1/(n+1).

The limit of this partial sum as n →∞ is 1.
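A short sketch checking the partial-sum formula exactly, using Python’s Fraction for exact rational arithmetic:

    from fractions import Fraction

    s = Fraction(0)
    for n in range(1, 11):
        s += Fraction(1, n * (n + 1))
        assert s == 1 - Fraction(1, n + 1)   # the telescoped form, exactly
    print(s)   # 10/11, already quite close to the limit 1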

Example 26.4.4 (Harmonic series diverges)
We can also make sense of the statement that ∑_{k=1}^∞ 1/k = ∞ (i.e. it diverges). We may bound the 2^n-th partial sums from below:

∑_{k=1}^{2^n} 1/k = 1/1 + 1/2 + 1/3 + ⋯ + 1/2^n
≥ 1/1 + 1/2 + (1/4 + 1/4) + (1/8 + 1/8 + 1/8 + 1/8) + ⋯ + (1/2^n + ⋯ + 1/2^n)    [the last block has 2^{n−1} terms]
= 1 + 1/2 + 1/2 + ⋯ + 1/2 = 1 + n/2.

A sequence satisfying s_{2^n} ≥ 1 + n/2 will never converge to a finite number!
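Again as a sketch, one can watch the doubling bound in action numerically:

    s, k = 0.0, 0
    for n in range(11):
        while k < 2 ** n:
            k += 1
            s += 1 / k
        print(n, round(s, 3), 1 + n / 2)   # the partial sum s_{2^n} is always at least 1 + n/2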

I had better also mention that for nonnegative sums, convergence is just the same as having “finite sum” in the following sense.

Proposition 26.4.5 (Partial sums of nonnegatives bounded implies convergent)
Let ∑_k a_k be a series of nonnegative real numbers. Then ∑_k a_k converges to some limit if and only if there is a constant M such that

a1 + ⋅⋅⋅+ an < M

for every positive integer n.

Proof. This is actually just ??  in disguise, but since we left the proof as an exercise back then, we’ll write it out this time.

Obviously if no such M exists then convergence will not happen, since this means the sequence sn of partial sums is unbounded.

Conversely, if such M exists then we have s1 ≤ s2 ≤ ⋯ < M. Then we contend the sequence s_n converges to L := sup_n s_n < ∞. (If you read the earlier proof that the existence of supremums implies completeness, the picture is nearly the same here, but simpler.)

Indeed, this means for any 𝜀 there are infinitely many terms of the sequence exceeding L − 𝜀; but since the sequence is monotonic, once s_n ≥ L − 𝜀 then s_{n′} ≥ L − 𝜀 for all n′ ≥ n. This implies convergence. □

Abuse of Notation 26.4.6 (Writing ∑ a_k < ∞). For this reason, if the a_k are nonnegative real numbers, it is customary to write

∑_k a_k < ∞

as a shorthand for “∑_k a_k converges to a finite limit” (or perhaps shorthand for “the partial sums of ∑_k a_k are bounded” — as we have just proved these are equivalent). We will use this notation too.

26.5  Series addition is not commutative: a horror story

One unfortunate property of the above definition is that it actually depends on the order of the elements. In fact, it turns out that there is an explicit way to describe when rearrangement is okay.

Definition 26.5.1. A series ∑_k a_k of real numbers is said to converge absolutely if

∑_k |a_k| < ∞

i.e. the series of absolute values converges to some limit. If the series converges, but not absolutely, we say it converges conditionally.

Proposition 26.5.2 (Absolute convergence ⟹ convergence)
If a series ∑_k a_k of real numbers converges absolutely, then it converges in the usual sense.

Exercise 26.5.3 (Great exercise). Prove this by using the Cauchy criteria: show that if the partial sums of ∑_k |a_k| are Cauchy, then so are the partial sums of ∑_k a_k.

Then, rearrangement works great.

Theorem 26.5.4 (Permutation of terms okay for absolute convergence)
Consider a series ∑_k a_k which is absolutely convergent and has limit L. Then any permutation of the terms will also converge to L.

Proof. Suppose ∑_k a_k converges to L, and (b_n) is a rearrangement. Let 𝜀 > 0. We will show that the partial sums of (b_n) are eventually within 𝜀 of L.

The hypothesis means that there is a large N in terms of 𝜀 such that

|∑_{k=1}^N a_k − L| < 𝜀/2    and    ∑_{k=N+1}^n |a_k| < 𝜀/2

for every n ≥ N (the former from vanilla convergence of ∑ a_k, and the latter from the fact that ∑ a_k converges absolutely, hence its partial sums are Cauchy).

Now suppose M is large enough that a1, …, a_N are contained within the terms {b1, …, b_M}. Then

b1 + ⋯ + b_M = (a1 + ⋯ + a_N) + (a_{i_1} + a_{i_2} + ⋯ + a_{i_{M−N}}),

where the M − N terms in the second parenthesis all have indices greater than N. The terms in the first parenthesis sum to within 𝜀/2 of L, and the terms in the second parenthesis have sum at most 𝜀/2 in absolute value, so the total b1 + ⋯ + b_M is within 𝜀/2 + 𝜀/2 = 𝜀 of L. □

In particular, when you have nonnegative terms, the world is great:

Nonnegative series can be rearranged at will.

And the good news is that actually, in practice, most of your sums will be nonnegative.

The converse is not true, and in fact, it is almost the worst possible converse you can imagine.

Theorem 26.5.5 (Permutation of terms meaningless for conditional convergence)
Consider a series ∑_k a_k which converges conditionally to some real number. Then, there exists a permutation of the series which converges conditionally to 1337.

(Or any constant. You can also get it to diverge, too.)

So, permutation is as bad as possible for conditionally convergent series, and hence don’t even bother to try.
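The proof of the theorem above is really a greedy algorithm, which the following Python sketch imitates on the alternating harmonic series (with a modest target in place of 1337, which would require astronomically many terms):

    target = 0.5
    pos = (1 / n for n in range(1, 10 ** 8, 2))    # positive terms 1, 1/3, 1/5, ...
    neg = (-1 / n for n in range(2, 10 ** 8, 2))   # negative terms -1/2, -1/4, ...
    s = 0.0
    for _ in range(10 ** 5):
        # greedily take a positive term while below target, a negative one while above
        s += next(pos) if s <= target else next(neg)
    print(s)   # within roughly 1e-5 of the target 0.5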

26.6  Limits of functions at points

Prototypical example for this section: lim_{x→∞} 1/x = 0.

We had also better define the notion of a limit of a real function, which (surprisingly) we haven’t actually defined yet. The definition will look like what we have seen before with continuity.

Definition 26.6.1. Let f : ℝ → ℝ be a function and let p be a point in the domain. Suppose there exists a real number L such that:

For every 𝜀 > 0, there exists δ > 0 such that if |x − p| < δ and x ≠ p, then |f(x) − L| < 𝜀.

Then we say L is the limit of f as x p, and write

lim_{x→p} f(x) = L.

There is an important point here: in this definition we deliberately require that x ≠ p.

The value lim_{x→p} f(x) does not depend on f(p), and accordingly we often do not even bother to define f(p).

Example 26.6.2 (Function with a hole)
Define the function f : ℝ → ℝ by

f(x) = { 3x      if x ≠ 0
       { 2019    otherwise.

Then lim_{x→0} f(x) = 0. The value f(0) = 2019 does not affect the limit. Obviously, because f(0) was made up to be some artificial value that did not agree with the limit, this function is discontinuous at x = 0.

Question 26.6.3 (Mandatory). Show that a function f is continuous at p if and only if lim_{x→p} f(x) exists and equals f(p).

Example 26.6.4 (Less trivial example: a rational piecewise function)
Define the function f : ℝ → ℝ as follows:

f(x) = { 1      if x = 0
       { 1/q    if x = p/q where q > 0 and gcd(p,q) = 1
       { 0      if x ∉ ℚ.

For example, f(π) = 0, f(2/3) = 1/3, f(0.17) = 1/100. Then

lim_{x→0} f(x) = 0.

For example, if |x| < 1/100 and x ≠ 0 then f(x) is either zero (for x irrational) or else at most 1/101 (if x is rational).

As f(0) = 1, this function is also discontinuous at x = 0. However, if we change the definition so that f(0) = 0 instead, then f becomes continuous at 0.

Example 26.6.5 (Famous example)
Let f(x) = (sin x)/x, f : ℝ → ℝ, where f(0) is assigned any value. Then

lim_{x→0} f(x) = 1.

We will not prove this here, since I don’t want to get into trig yet. In general, I will basically only use trig functions for examples and not for any theory, so most properties of the trig functions will just be quoted.

Abuse of Notation 26.6.6 (The usual notation). From now on, the above example will usually be abbreviated to just

lim_{x→0} (sin x)/x = 1.

The reason there is a slight abuse here is that I’m supposed to feed a function f into the limit, and instead I’ve written down an expression which is defined everywhere — except at x = 0. But that f(0) value doesn’t change anything. So the above means: “the limit of the function described by f(x) = (sin x)/x, except f(0) can be whatever it wants because it doesn’t matter”.

Remark 26.6.7 (For metric spaces) You might be surprised that I didn’t define the notion of lim_{x→p} f(x) earlier for f : M → N a function on metric spaces. We can actually do so as above, but there is one nuance: what if our metric space M is discrete, so p has no points nearby it? (Or even more simply, what if M is a one-point space?) We then cannot define lim_{x→p} f(x) at all.

Thus if f : M → N and we want to define lim_{x→p} f(x), we have the requirement that p should have a point other than itself within 𝜀 of it, for any 𝜀 > 0. In other words, p should not be an isolated point.

As usual, there are no surprises with arithmetic: we have lim_{x→p}(f(x) ± g(x)) = lim_{x→p} f(x) ± lim_{x→p} g(x), and so on and so forth. We have effectively done this proof before, so we won’t repeat it again.

26.7  Limits of functions at infinity

Annoyingly, we actually have to make this definition separately, even though it will not feel any different from earlier examples.

Definition 26.7.1. Let f : ℝ → ℝ. Suppose there exists a real number L such that:

For every 𝜀 > 0, there exists a constant M such that if x > M, then |f(x) − L| < 𝜀.

Then we say L is the limit of f as x approaches ∞, and write

lim_{x→∞} f(x) = L.

The limit limx→−∞f(x) is defined similarly, with x > M replaced by x < M.

Fortunately, as ∞ is not an element of ℝ, we don’t have to do the same antics about f(∞) like we had to do with “f(p) set arbitrarily”. So these examples can be more easily written down.

Example 26.7.2 (Limit at infinity)
The usual:

lim_{x→∞} 1/x = 0.

I’ll even write out the proof: for any 𝜀 > 0, if x > 1/𝜀 then |1/x − 0| < 𝜀.

There are no surprises with arithmetic: we have lim_{x→∞}(f(x) ± g(x)) = lim_{x→∞} f(x) ± lim_{x→∞} g(x), and so on and so forth. This is about the fourth time I’ve mentioned this, so I will not say more.

26.8  A few harder problems to think about

Problem 26A. Define the sequence

a_n = (−1)^n + n³/2^n

for every positive integer n. Compute the limit infimum and the limit supremum.

Problem 26B. For which bounded sequences an does liminf nan = limsupnan?

Problem 26C (Comparison test). Let ∑ a_n and ∑ b_n be two series. Assume ∑ b_n is absolutely convergent, and |a_n| ≤ |b_n| for all integers n. Prove that ∑_n a_n is absolutely convergent.

Problem 26D (Geometric series). Let −1 < r < 1 be a real number. Show that the series

1 + r + r2 + r3 + ...

converges absolutely and determine what it converges to.

Problem 26E (Alternating series test). Let a0 ≥ a1 ≥ a2 ≥ a3 ≥ ⋯ be a weakly decreasing sequence of nonnegative real numbers, and assume that lim_{n→∞} a_n = 0. Show that the series ∑_n (−1)^n a_n is convergent (it need not be absolutely convergent).

Problem 26F ([?, Chapter 3, Exercise 55]). Let (a_n)_{n≥1} and (b_n)_{n≥1} be sequences of real numbers. Assume a1 ≤ a2 ≤ ⋯ ≤ 1000 and moreover that ∑_n b_n converges. Prove that ∑_n a_n b_n converges. (Note that in both the hypothesis and statement, we do not have absolute convergence.)

Problem 26G (Putnam 2016 B1). Let x0, x1, x2, … be the sequence such that x0 = 1 and for n ≥ 0,

x_{n+1} = log(e^{x_n} − x_n)

(as usual, log is the natural logarithm). Prove that the infinite series x0 + x1 + ⋯ converges and determine its value.

Problem 26H. Consider again the function f : ℝ → ℝ in ??  defined by

f(x) = { 1      if x = 0
       { 1/q    if x = p/q where q > 0 and gcd(p,q) = 1
       { 0      if x ∉ ℚ.

For every real number p, compute lim_{x→p} f(x), if it exists. At which points is f continuous?

27  Bonus: A hint of p-adic numbers

This is a bonus chapter meant for those who have also read about rings and fields: it’s a nice tidbit at the intersection of algebra and analysis.

In this chapter, we are going to redo most of the previous chapter with the absolute value |•| replaced by the p-adic one. This will give us the p-adic integers ℤp, and the p-adic numbers ℚp. The one-sentence description is that these are “integers/rationals carrying full mod p^e information” (and only that information).

In everything that follows p is always assumed to denote a prime. The first four sections will cover the founding definitions, culminating in a short solution to a USA TST problem. We will then state (mostly without proof) some more surprising results about continuous functions f : ℤp → ℚp; finally we close with the famous proof of the Skolem-Mahler-Lech theorem using p-adic analysis.

27.1  Motivation

Before really telling you what ℤp and ℚp are, let me tell you what you might expect them to do.

In elementary/olympiad number theory, we’re already well-familiar with the following two ideas:

Let me expand on the first point. Suppose we have some Diophantine equation. In olympiad contexts, one can take an equation modulo p to gain something else to work with. Unfortunately, taking modulo p loses some information: the reduction ℤ → ℤ/p is far from injective.

If we want finer control, we could consider instead taking modulo p², rather than taking modulo p. This can also give some new information (cubes modulo 9, anyone?), but it has the disadvantage that ℤ/p² isn’t a field, so we lose a lot of the nice algebraic properties that we got when we took modulo p.

One of the goals of p-adic numbers is that we can get around these two issues I described. The p-adic numbers we introduce are going to have the following properties:

1.
You can “take modulo p^e for all e at once”. In olympiad contexts, we are used to picking a particular modulus and then seeing what happens if we take that modulus. But with p-adic numbers, we won’t have to make that choice. An equation of p-adic numbers carries enough information to take modulo p^e.
2.
The numbers ℚp form a field, the nicest possible algebraic structure: 1/p makes sense. Contrast this with ℤ/p², which is not even an integral domain.
3.
It doesn’t lose as much information as taking modulo p does: rather than the surjective map ℤ → ℤ/p we have an injective map ℤ ↪ ℤp.
4.
Despite this, you “ignore” some “irrelevant” data. Just like taking modulo p, you want to zoom in on a particular type of algebraic information, and this means necessarily losing sight of other things.

So, you can think of p-adic numbers as the right tool to use if you only really care about modulo p^e information, but normal ℤ/p^e isn’t quite powerful enough.

To be more concrete, I’ll give a poster example now:

Example 27.1.1 (USA TST 2002/2)
For a prime p, show that the value of

f_p(x) = ∑_{k=1}^{p−1} 1/(px + k)²    (mod p³)

does not depend on x.

Here is a problem where we clearly only care about p^e-type information. Yet it’s a nontrivial challenge to do the necessary manipulations mod p³ (try it!). The basic issue is that there is no good way to deal with the denominators modulo p³ (in part because ℤ/p³ is not even an integral domain).

However, with p-adic analysis we’re going to be able to overcome these limitations and give a “straightforward” proof by using the identity

(1 + px/k)^{−2} = ∑_{n≥0} (−2 choose n) (px/k)^n.

Such an identity makes no sense over ℚ or ℝ for convergence reasons, but it will work fine over ℚp, which is all we need.

27.2  Algebraic perspective

Prototypical example for this section: −1/2 = 1 + 3 + 3² + 3³ + ⋯ in ℤ₃.

We now construct ℤp and ℚp. I promised earlier that a p-adic integer will let you look at “all residues modulo p^e” at once. This definition will formalize this.

27.2.i  Definition of ℤp

Definition 27.2.1 (Introducing ℤp). A p-adic integer is a sequence

x = (x1 mod p, x2 mod p², x3 mod p³, …)

of residues x_e modulo p^e for each integer e, satisfying the compatibility relations x_i ≡ x_j (mod p^i) for i < j.

The set ℤp of p-adic integers forms a ring under component-wise addition and multiplication.

Example 27.2.2 (Some 3-adic integers)
Let p = 3. Every usual integer n generates a (compatible) sequence of residues modulo p^e for each e, so we can view each ordinary integer as a p-adic one:

50 = (2 mod 3, 5 mod 9, 23 mod 27, 50 mod 81, 50 mod 243, ...).

On the other hand, there are sequences of residues which do not correspond to any usual integer despite satisfying compatibility relations, such as

(1 mod 3, 4 mod 9, 13 mod 27, 40 mod 81, ...)

which can be thought of as x = 1 + p + p² + ⋯.

In this way we get an injective map

ℤ ↪ ℤp,    n ↦ (n mod p, n mod p², n mod p³, …)

which is not surjective. So there are more p-adic integers than usual integers.

(Remark for experts: those of you familiar with category theory might recognize that this definition can be written concisely as

ℤp := lim←− ℤ/p^e ℤ

where the inverse limit is taken across e ≥ 1.)

Exercise 27.2.3. Check that ℤp is an integral domain.

27.2.ii  Base p expansion

Here is another way to think about p-adic integers using “base p”. As in the example earlier, every usual integer can be written in base p, for example

50 = (1212)₃ = 2·3⁰ + 1·3¹ + 2·3² + 1·3³.

More generally, given any x = (x1, …) ∈ ℤp, we can write down a “base p” expansion in the sense that there are exactly p choices of x_k given x_{k−1}. Continuing the example earlier, we would write

(1 mod 3, 4 mod 9, 13 mod 27, 40 mod 81, …) = 1 + 3 + 3² + ⋯ = (⋯1111)₃

and in general we can write

x = ∑_{k≥0} a_k p^k = (⋯a₂a₁a₀)_p

where a_k ∈ {0, …, p−1}, such that the equation holds modulo p^e for each e. Note the expansion is infinite to the left, which is different from what you’re used to.

(Amusingly, negative integers also have infinite base p expansions: −4 = (⋯22212)₃, corresponding to (2 mod 3, 5 mod 9, 23 mod 27, 77 mod 81, …).)

Thus you may often hear the advertisement that a p-adic integer is a “possibly infinite base p expansion”. This is correct, but later on we’ll be thinking of ℤp in a more and more “analytic” way, and so I prefer to think of this as

p-adic integers are Taylor series with base p.

Indeed, much of your intuition from generating functions K[[X]] (where K is a field) will carry over to ℤp.

27.2.iii  Constructing ℚp

Here is one way in which your intuition from generating functions carries over:

Proposition 27.2.4 (Non-multiples of p are all invertible)
The number x ∈ ℤp is invertible if and only if x1 ≠ 0. In symbols,

x ∈ ℤp^× ⟺ x ≢ 0 (mod p).

Contrast this with the corresponding statement for K[[X]]: a generating function F ∈ K[[X]] is invertible iff F(0) ≠ 0.

Proof. If x ≡ 0 (mod p) then x1 = 0, so x is clearly not invertible. Otherwise, x_e ≢ 0 (mod p) for all e, so we can take an inverse y_e modulo p^e, with x_e y_e ≡ 1 (mod p^e). As the y_e are themselves compatible, the element (y1, y2, …) is an inverse. □

Example 27.2.5 (We have −1/2 = (⋯1111)₃ in ℤ₃)
We claim the earlier example is actually

−1/2 = (1 mod 3, 4 mod 9, 13 mod 27, 40 mod 81, …) = 1 + 3 + 3² + ⋯ = (⋯1111)₃.

Indeed, multiplying it by −2 gives

(−2 mod 3, −8 mod 9, −26 mod 27, −80 mod 81, …) = 1.

(Compare this with the “geometric series” 1 + 3 + 3² + ⋯ = 1/(1−3). We’ll actually be able to formalize this later, but not yet.)

Remark 27.2.6 (1/2 is an integer for p > 2) The earlier proposition implies that 1/2 ∈ ℤ₃ (among other things); your intuition about what is an “integer” is different here! In olympiad terms, we already knew 1/2 (mod 3) made sense, which is why calling 1/2 an “integer” in the 3-adics is correct, even though it doesn’t correspond to any element of ℤ.
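You can compute these residues directly in Python (a sketch, assuming Python 3.8+ where pow accepts a negative exponent for modular inverses); the base 3 digits of −1/2 come out to all 1’s, as claimed:

    for e in range(1, 7):
        m = 3 ** e
        r = pow(-2, -1, m)       # the residue of -1/2 = (-2)^(-1) modulo 3^e
        digits, t = "", r
        for _ in range(e):
            digits = str(t % 3) + digits
            t //= 3
        print(e, r, digits)      # e.g. e = 4 gives r = 40 = 1111_3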

Exercise 27.2.7 (Unimportant but tricky). Rational numbers correspond exactly to eventually periodic base p expansions.

With this observation, here is now the definition of ℚp.

Definition 27.2.8 (Introducing ℚp). Since ℤp is an integral domain, we let ℚp denote its field of fractions. These are the p-adic numbers.

Continuing our generating functions analogy:

ℤp is to ℚp  as  K[[X]] is to K((X)).

This means

ℚp can be thought of as Laurent series with base p.

and in particular according to the earlier proposition we deduce:

Proposition 27.2.9 (ℚp looks like formal Laurent series)
Every nonzero element of ℚp is uniquely of the form

p^k u    where k ∈ ℤ, u ∈ ℤp^×.

Thus, continuing our base p analogy, elements of ℚp are in bijection with “Laurent series”

∑_{k ≥ −n} a_k p^k = (⋯a₂a₁a₀.a₋₁a₋₂…a₋ₙ)_p

for a_k ∈ {0, …, p−1}. So the base p representations of elements of ℚp can be thought of as the same as usual, but extending infinitely far to the left (rather than to the right).

Remark 27.2.10 (Warning) The field ℚp has characteristic zero, not p.

Remark 27.2.11 (Warning on fraction field) This result implies that you shouldn’t think about elements of ℚp as x/y (for x, y ∈ ℤp) in practice, even though this is the official definition (and what you’d expect from the name ℚp). The only denominators you need are powers of p.

To keep pushing the formal Laurent series analogy, K((X)) is usually not thought of as a quotient of generating functions but rather as “formal series with some negative exponents”. You should apply the same intuition to ℚp.

Remark 27.2.12 — At this point I want to make a remark about the fact 1/p ∈ ℚp, connecting it to the wish-list of properties I had before. In elementary number theory you can take equations modulo p, but if you do, the quantity n/p mod p doesn’t make sense unless you know n mod p². You can’t fix this by just taking modulo p², since then you need n mod p³ to get n/p mod p², ad infinitum. You can work around issues like this, but the nice feature of ℤp and ℚp is that you have modulo p^e information for “all e at once”: the information of x ∈ ℚp packages all the modulo p^e information simultaneously. So you can divide by p with no repercussions.

27.3  Analytic perspective

27.3.i  Definition

Up until now we’ve been thinking about things mostly algebraically, but moving forward it will be helpful to start using the language of analysis. Usually, two real numbers are considered “close” if they are close on the number line, but for p-adic purposes we only care about modulo p^e information. So, we’ll instead think of two elements of ℤp or ℚp as “close” if they differ by a multiple of a large power p^e.

For this we’ll borrow the familiar νp from elementary number theory.

Definition 27.3.1 (p-adic valuation and absolute value). We define the p-adic valuation νp : ℚp^× → ℤ in the following two equivalent ways:

• For nonzero x ∈ ℤp, let νp(x) be the largest integer e such that x ≡ 0 (mod p^e), and extend to ℚp^× by νp(x/y) = νp(x) − νp(y).
• Writing x = p^k u with k ∈ ℤ and u ∈ ℤp^× as in Proposition 27.2.9, let νp(x) = k.

By convention we set νp(0) = +∞. Finally, define the p-adic absolute value |•|p by

|x|p = p^{−νp(x)}.

In particular |0|p = 0.

This fulfills the promise that x and y are close if they look the same modulo p^e for large e; in that case νp(x − y) is large and accordingly |x − y|p is small.

27.3.ii  Ultrametric space

In this way, ℤp and ℚp become metric spaces with metric given by |x − y|p.

Exercise 27.3.2. Suppose f : ℤp → ℚp is continuous and f(n) = (−1)^n for every n ≥ 0. Prove that p = 2.

In fact, these spaces satisfy a stronger form of the triangle inequality than you are used to from ℝ.

Proposition 27.3.3 (|∙|p is an ultrametric)
For any x, y ∈ ℚp, we have the strong triangle inequality

|x + y|p ≤ max {|x|p, |y|p}.

Equality holds if (but not only if) |x|p ≠ |y|p.

However, ℚp is more than just a metric space: it is a field, with its own addition and multiplication. This means we can do analysis just like in ℝ or ℂ: basically, any notion such as “continuous function”, “convergent series”, et cetera has a p-adic analog. In particular, we can define what it means for an infinite sum to converge:

Definition 27.3.4 (Convergence notions). Here are some examples of p-adic analogs of “real-world” notions.

With this definition in place, the “base p” discussion we had earlier is now true in the analytic sense: if x = (⋯a₂a₁a₀)_p ∈ ℤp then

∑_{k=0}^∞ a_k p^k    converges to x.

Indeed, the difference between x and the nth partial sum is divisible by p^n, hence the partial sums approach x as n → ∞.

While the definitions are all the same, there are some changes in properties that should be true. For example, in ℚp convergence of partial sums is simpler:

Proposition 27.3.5 (|x_k|p → 0 iff convergence of series)
A series ∑_{k=1}^∞ x_k in ℚp converges to some limit if and only if lim_{k→∞} |x_k|p = 0.

Contrast this with ∑ 1/n = ∞ in ℝ. You can think of this as a consequence of the strong triangle inequality.

Proof. By multiplying by a large enough power of p, we may assume x_k ∈ ℤp. (This isn’t actually necessary, but makes the notation nicer.)

Observe that the partial sums ∑_{k=1}^N x_k (mod p) must eventually stabilize, since for large enough n we have |x_n|p < 1 ⟺ νp(x_n) ≥ 1. So let a1 be the eventual residue modulo p of ∑_{k=1}^N x_k for large N. In the same way let a2 be the eventual residue modulo p², and so on. Then one can check we approach the limit a = (a1, a2, …). □

27.3.iii  More fun with geometric series

Let’s finally state the p-adic analog of the geometric series formula.

Proposition 27.3.6 (Geometric series)
Let x ∈ ℤp with |x|p < 1. Then

1/(1 − x) = 1 + x + x² + x³ + ⋯.

Proof. Note that the partial sums satisfy 1 + x + x² + ⋯ + x^n = (1 − x^{n+1})/(1 − x), and x^{n+1} → 0 as n → ∞ since |x|p < 1. □

So, 1 + 3 + 3² + ⋯ = −1/2 is really a correct convergence in ℤ₃. And so on.

If you buy the analogy that ℤp is generating functions with base p, then all the olympiad generating functions you might be used to have p-adic analogs. For example, you can prove more generally that:

Theorem 27.3.7 (Generalized binomial theorem)
If x ∈ ℤp and |x|p < 1, then for any r we have the series convergence

∑_{n≥0} (r choose n) x^n = (1 + x)^r.

(I haven’t defined (1 + x)^r, but it has the properties you expect.)

27.3.iv  Completeness

Note that the definition of |•|p could have been given for ℚ as well; we didn’t need ℚp to introduce it (after all, we have νp in olympiads already). The big important theorem I must state now is:

Theorem 27.3.8 (ℚp is complete)
The space ℚp is the completion of ℚ with respect to |•|p.

This is the definition of ℚp you’ll see more frequently; one then defines ℤp in terms of ℚp (rather than vice-versa) according to

ℤp = {x ∈ ℚp : |x|p ≤ 1}.

27.3.v  Philosophical notes

Let me justify why this definition is philosophically nice. Suppose you are an ancient Greek mathematician who is given:

Problem for Ancient Greeks. Estimate the value of the sum

S = 1/1² + 1/2² + ⋯ + 1/10000²

to within 0.001.

The sum S consists entirely of rational numbers, so the problem statement would be fair game for ancient Greece. But it turns out that in order to get a good estimate, it really helps if you know about the real numbers: because then you can construct the infinite series ∑_{n≥1} 1/n² = π²/6, and deduce that S ≈ π²/6, up to some small error term from the terms past 1/10001², which can be bounded.

Of course, in order to have access to enough theory to prove that ∑_{n≥1} 1/n² = π²/6, you need to have the real numbers; it’s impossible to do calculus in ℚ (the sequence 1, 1.4, 1.41, 1.414, … is considered “not convergent”!).

Now fast-forward to 2002, and suppose you are given

Problem from USA TST 2002. Estimate the sum

f_p(x) = ∑_{k=1}^{p−1} 1/(px + k)²

to within mod p³.

Even though f_p(x) is a rational number, it still helps to be able to do analysis with infinite sums, and then bound the error term (i.e. take mod p³). But the space ℚ is not complete with respect to |•|p either, and thus it makes sense to work in the completion of ℚ with respect to |•|p. This is exactly ℚp.

In any case, let’s finally solve ?? .

Example 27.3.9 (USA TST 2002)
We will now compute

f_p(x) = ∑_{k=1}^{p−1} 1/(px + k)²    (mod p³).

Armed with the generalized binomial theorem, this becomes straightforward.

f_p(x) = ∑_{k=1}^{p−1} 1/(px + k)²
= ∑_{k=1}^{p−1} (1/k²) (1 + px/k)^{−2}
= ∑_{k=1}^{p−1} (1/k²) ∑_{n≥0} (−2 choose n) (px/k)^n
= ∑_{n≥0} (−2 choose n) ∑_{k=1}^{p−1} (1/k²) (x/k)^n p^n
≡ ∑_{k=1}^{p−1} 1/k² − 2x (∑_{k=1}^{p−1} 1/k³) p + 3x² (∑_{k=1}^{p−1} 1/k⁴) p²    (mod p³).

Using the elementary facts that νp(∑_{k=1}^{p−1} 1/k³) ≥ 2 and νp(∑_{k=1}^{p−1} 1/k⁴) ≥ 1, this solves the problem.
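Of course, this is also easy to sanity-check numerically (a sketch, not a proof — and assuming Python 3.8+ for modular inverses via pow):

    p, mod = 5, 5 ** 3

    def f(x):
        # sum over k of the inverse square of (px + k), everything mod p^3
        return sum(pow(p * x + k, -2, mod) for k in range(1, p)) % mod

    print({f(x) for x in range(25)})   # a single residue: f_p(x) mod p^3 is constant in x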

27.4  Mahler coefficients

One of the big surprises of p-adic analysis is that:

We can basically describe all continuous functions ℤp → ℚp.

They are given by a basis of functions

(x choose n) := x(x − 1)⋯(x − (n − 1)) / n!

in the following way.

Theorem 27.4.1 (Mahler; see [?, Theorem 51.1, Exercise 51.b])
Let f : ℤp → ℚp be continuous, and define

a_n = ∑_{k=0}^{n} (n choose k) (−1)^{n−k} f(k).    (27.1)

Then lim_{n→∞} a_n = 0 and

f(x) = ∑_{n≥0} a_n (x choose n).

Conversely, if (a_n) is any sequence converging to zero, then f(x) = ∑_{n≥0} a_n (x choose n) defines a continuous function satisfying (27.1).

The a_n are called the Mahler coefficients of f.

Exercise 27.4.2. We proved earlier that if f : ℤp → ℚp is continuous and f(n) = (−1)^n for every n ≥ 0 then p = 2. Re-prove this using Mahler’s theorem, and this time show conversely that a unique such f exists when p = 2.

You’ll note that these are the same finite differences that one uses on polynomials in high school math contests, which is why they are also called “Mahler differences”.

a0 = f(0)
a1 = f(1) − f(0)
a2 = f(2) − 2f(1) + f(0)
a3 = f(3) − 3f(2) + 3f(1) − f(0).

Thus one can think of a_n → 0 as saying that the values of f(0), f(1), … behave like a polynomial modulo p^e for every e ≥ 0.
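As a sketch tying this back to Exercise 27.4.2, we can compute the Mahler coefficients of f(n) = (−1)^n from (27.1); they come out to a_n = (−2)^n, which tends to 0 in |•|₂ (and in no other |•|p):

    from math import comb

    def mahler(f, count):
        # a_n = sum over k of C(n,k) * (-1)^(n-k) * f(k), as in (27.1)
        return [sum(comb(n, k) * (-1) ** (n - k) * f(k) for k in range(n + 1))
                for n in range(count)]

    a = mahler(lambda k: (-1) ** k, 8)
    print(a)                                     # [1, -2, 4, -8, 16, -32, 64, -128]
    assert all(x == (-2) ** n for n, x in enumerate(a))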

The notion “analytic” also has a Mahler interpretation. First, the definition.

Definition 27.4.3. We say that a function f : ℤp → ℚp is analytic if it has a power series expansion

∑_{n≥0} c_n x^n    (c_n ∈ ℚp)    converging for x ∈ ℤp.

Theorem 27.4.4 ([?, Theorem 54.4])
The function f(x) = ∑_{n≥0} a_n (x choose n) is analytic if and only if

lim_{n→∞} a_n / n! = 0.

Analytic functions also satisfy the following niceness result:

Theorem 27.4.5 (Strassmann’s theorem)
Let f : ℤp → ℚp be analytic and not identically zero. Then f has finitely many zeros.

To give an application of these results, we will prove the following result, which was interesting even before p-adics came along!

Theorem 27.4.6 (Skolem-Mahler-Lech)
Let (x_i)_{i≥0} be an integral linear recurrence, meaning (x_i)_{i≥0} is a sequence of integers such that

x_n = c1 x_{n−1} + c2 x_{n−2} + ⋯ + c_k x_{n−k}    for n = k, k+1, …

holds for some fixed choice of integers c1, …, c_k. Then the set of indices {i | x_i = 0} is eventually periodic.

Proof. According to the theory of linear recurrences, there exist a matrix A and vectors u, v such that we can write x_i as a dot product

x_i = ⟨A^i u, v⟩.

Let p be a prime not dividing det A. Let T be an integer such that A^T ≡ id (mod p) (with id denoting the identity matrix).

Fix any 0 ≤ r < T. We will prove that either all the terms

f(n) = x_{nT+r},    n = 0, 1, …

are zero, or at most finitely many of them are. This will conclude the proof.

Let A^T = id + pB for some integer matrix B. We have

f(n) = ⟨A^{nT+r} u, v⟩ = ⟨(id + pB)^n A^r u, v⟩
= ∑_{k≥0} (n choose k) p^k ⟨B^k A^r u, v⟩
= ∑_{k≥0} a_k (n choose k)    where a_k = p^k ⟨B^k A^r u, v⟩.

Thus we have written f in Mahler form. Initially, we define f : ℤ≥0 → ℤ, but by Mahler’s theorem (since lim_{k→∞} a_k = 0) it follows that f extends to a continuous function f : ℤp → ℚp. Also, we can check that lim_{k→∞} a_k / k! = 0, hence f is even analytic.

Thus by Strassmann’s theorem, f is either identically zero, or else it has finitely many zeros, as desired. □

27.5  A few harder problems to think about

Problem 27A (ℤp is compact). Show that ℚp is not compact, but ℤp is. (For the latter, I recommend using sequential compactness.)

Problem 27B (Totally disconnected). Show that both ℤp and ℚp are totally disconnected: there are no connected sets other than the empty set and singleton sets.

Problem 27C (USA TST 2011). Let p be a prime. We say that a sequence of integers {z_n}_{n≥0} is a p-pod if for each e ≥ 0, there is an N ≥ 0 such that whenever m ≥ N, p^e divides the sum

∑_{k=0}^{m} (−1)^k (m choose k) z_k.

Prove that if both sequences {x_n}_{n≥0} and {y_n}_{n≥0} are p-pods, then the sequence {x_n y_n}_{n≥0} is a p-pod.

28  Differentiation

28.1  Definition

Prototypical example for this section: x3 has derivative 3x2.

I suspect most of you have seen this before, but:

Definition 28.1.1. Let U be an open subset of ℝ and let f : U → ℝ be a function. Let p ∈ U. We say f is differentiable at p if the limit

lim_{h→0} (f(p + h) − f(p)) / h

exists. If so, we denote its value by f′(p) and refer to this as the derivative of f at p.

The function f is differentiable if it is differentiable at every point. In that case, we regard the derivative f′ : U → ℝ as a function in its own right.

Exercise 28.1.2. Show that if f is differentiable at p then it is continuous at p too.

Here is the picture. Suppose f : is differentiable (hence continuous). We draw a graph of f in the usual way and consider values of h. For any nonzero h, what we get is the slope of the secant line joining (p,f(p)) to (p + h,f(p + h)). However, as h gets close to zero, that secant line begins to approach a line which is tangent to the graph of the curve. A picture with f a parabola is shown below, with the tangent in red, and the secant in dashed green.

So the picture in your head should be that

f′(p) looks like the slope of the tangent line at (p, f(p)).

Remark 28.1.3 — Note that derivatives are defined for functions on open intervals. This is important. If f : [a,b] → ℝ for example, we could still define the derivative at each interior point, but f′(a) no longer makes sense, since f is not given a value on any open neighborhood of a.

Let’s do one computation and get on with this.

Example 28.1.4 (Derivative of x3 is 3x2)
Let f : ℝ → ℝ by f(x) = x³. For any point p, and nonzero h, we can compute

(f(p + h) − f(p)) / h = ((p + h)³ − p³) / h
= (3p²h + 3ph² + h³) / h
= 3p² + 3ph + h².

Thus,

lim_{h→0} (f(p + h) − f(p)) / h = lim_{h→0} (3p² + 3ph + h²) = 3p².

Thus the slope at each point p of the graph of f is given by the formula 3p². It is customary to then write f′(x) = 3x² as the derivative of the entire function f.

Abuse of Notation 28.1.5. We will now be sloppy and write this as (x³)′ = 3x². This is shorthand for the significantly more verbose “the real-valued function x³ on domain so-and-so has derivative 3p² at every point p in its domain”.

In general, a real-valued differentiable function f : U → ℝ naturally gives rise to a derivative f′(p) at every point p ∈ U, so it is customary to just give up on p altogether and treat f′ as a function itself U → ℝ, even though this real number has a “different interpretation”: f′(p) is meant to interpret a slope (e.g. your hourly pay rate) as opposed to a value (e.g. your total dollar worth at time t). If f is a function from real life, the units do not even match!

This convention is so deeply entrenched I cannot uproot it without more confusion than it is worth. But if you read the chapters on multivariable calculus you will see how it comes back to bite us, when I need to re-define the derivative to be a linear map, rather than single real numbers.

28.2  How to compute them

Same old, right? Sum rule, all that jazz.

Theorem 28.2.1 (Your friendly high school calculus rules)
In what follows f and g are differentiable functions, and U, V are open subsets of ℝ.

• (Sum rule) If f, g : U → ℝ, then f + g is differentiable and (f + g)′ = f′ + g′.
• (Product rule) If f, g : U → ℝ, then fg is differentiable and (fg)′ = f′g + fg′.
• (Chain rule) If f : U → V and g : V → ℝ, then g ∘ f is differentiable and (g ∘ f)′(x) = g′(f(x)) · f′(x).

Proof.

Exercise 28.2.2. Compute the derivative of the polynomial f(x) = x³ + 10x² + 2019, viewed as a function f : ℝ → ℝ.

Remark 28.2.3 — Quick linguistic point: the theorems above all hold at each individual point. For example, the sum rule really should say that if f, g : U → ℝ are differentiable at the point p then so is f + g, and the derivative equals f′(p) + g′(p). Thus if f and g are differentiable on all of U, it of course follows that (f + g)′ = f′ + g′. So each of the above rules has a “point-by-point” form which then implies the “whole U” form.

We only state the latter since that is what is used in practice. However, in the rare situations where you have a function differentiable only at certain points of U rather than the whole interval U, you can still use the point-by-point forms.

We next list some derivatives of well-known functions, but as we do not give rigorous definitions of these functions, we do not prove these here.

Proposition 28.2.4 (Derivatives of some well-known functions)

• (x^n)′ = n x^{n−1} for each positive integer n;
• (e^x)′ = e^x;
• (sin x)′ = cos x and (cos x)′ = −sin x.

Example 28.2.5 (A typical high-school calculus question)
This means that you can mechanically compute the derivatives of any artificial function obtained by using the above, which makes it a great source of busy work in American high schools and universities. For example, if

f(x) = e^x + x sin(x²)    f : ℝ → ℝ

then one can compute f′ by:

f′(x) = (e^x)′ + (x sin(x²))′    [sum rule]
= e^x + (x sin(x²))′    [above table]
= e^x + (x)′ sin(x²) + x (sin(x²))′    [product rule]
= e^x + sin(x²) + x (sin(x²))′    [(x)′ = 1]
= e^x + sin(x²) + x · 2x cos(x²)    [chain rule].

Of course, this function f is totally artificial and has no meaning, which is why calculus is the topic of widespread scorn in the USA. That said, it is worth appreciating that calculations like this are possible: it would be better to write the pseudo-theorem “derivatives can actually be computed”.
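(Sketches like the following also make such computations harder to get wrong: compare the formula against a centered difference quotient at a few points.)

    import math

    f = lambda x: math.exp(x) + x * math.sin(x ** 2)
    fp = lambda x: math.exp(x) + math.sin(x ** 2) + 2 * x ** 2 * math.cos(x ** 2)

    h = 1e-6
    for x in (0.0, 0.5, 1.0, 2.0):
        approx = (f(x + h) - f(x - h)) / (2 * h)   # centered difference quotient
        assert abs(approx - fp(x)) < 1e-6
    print("formula matches the difference quotient")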

If we take for granted that (e^x)′ = e^x, then we can derive two more useful functions to add to our library of functions we can differentiate.

Corollary 28.2.6 (Power rule)
Let r be a real number. The function ℝ_{>0} → ℝ given by x ↦ x^r has derivative (x^r)′ = r x^{r−1}.

Proof. We knew this for integers r already, but now we can prove it for any positive real number r. Write

f(x) = x^r = e^{r log x}

considered as a function f : ℝ_{>0} → ℝ. The chain rule (together with the fact that (e^x)′ = e^x) now gives

f′(x) = e^{r log x} · (r log x)′ = e^{r log x} · r/x = x^r · r/x = r x^{r−1}.

The reason we don’t prove the formulas for e^x and log x is that we don’t at the moment even have a rigorous definition for either, or even for 2^x if x is not rational. However, it’s nice to know that some things imply the others. □

Corollary 28.2.7 (Derivative of log is 1/x)
The function log : ℝ_{>0} → ℝ has derivative (log x)′ = 1/x.

Proof. We have that x = e^{log x}. Differentiate both sides, and again use the chain rule:

1 = e^{log x} · (log x)′.

Thus (log x)′ = 1/e^{log x} = 1/x. □

28.3  Local (and global) maximums

Prototypical example for this section: Horizontal tangent lines to the parabola are typically good pictures.

You may remember from high school that one classical use of calculus was to extract the minimum or maximum values of functions. We will give a rigorous description of how to do this here.

Definition 28.3.1. Let f : U → ℝ be a function. A local maximum is a point p ∈ U such that there exists an open neighborhood V of p (contained inside U) such that f(p) ≥ f(x) for every x ∈ V.

A local minimum is defined similarly.

Definition 28.3.2. A point p is a local extremum if it satisfies either of these.

The nice thing about derivatives is that they pick up all extrema.

Theorem 28.3.3 (Fermat’s theorem on stationary points)
Suppose f : U → ℝ is differentiable and p ∈ U is a local extremum. Then f′(p) = 0.

If you draw a picture, this result is not surprising.

(Note also: the converse is not true. Say, f(x) = x^2019 has f′(0) = 0 but x = 0 is not a local extremum for f.)

Proof. Assume for contradiction f′(p) > 0. Choose any 𝜀 > 0 with 𝜀 < f′(p). Then for sufficiently small |h| we should have

(f(p + h) − f(p)) / h > 𝜀.

In particular, f(p + h) > f(p) for small h > 0, while f(p + h) < f(p) for small h < 0. So p is not a local extremum.

The proof for f′(p) < 0 is similar. □

However, this is not actually adequate if we want a complete method for optimization. The issue is that we seek global extrema, which may not even exist: for example f(x) = x (which has f′(x) = 1) obviously has no local extrema at all. The key to resolving this is to use compactness: we change the domain to be a compact set Z, for which we know that f will achieve some global maximum. The set Z will naturally have some interior S, and calculus will give us all the extrema within S. Then we manually check all cases outside S.

Let’s see two extended examples. The first one is simple, and you probably already know about it, but I want to show you how to use compactness to argue thoroughly, and how the “boundary” points naturally show up.

Example 28.3.4 (Rectangle area optimization)
Suppose we consider rectangles with perimeter 20 and want the rectangle with the smallest or largest area.

If we choose the legs of the rectangle to be x and 10 x, then we are trying to optimize the function

f(x) = x(10 − x) = 10x − x²    f : [0,10] → ℝ.

By compactness, there exists some global maximum and some global minimum.

As f is differentiable on (0,10), we find that any global maximum p ∈ (0,10) will be a local maximum too, and hence should satisfy

0 = f′(p) = 10 − 2p ⟹ p = 5.

Also, the points x = 0 and x = 10 lie in the domain but not the interior (0,10). Therefore the global extrema (in addition to existing) must be among the three suspects {0,5,10}.

We finally check f(0) = 0, f(5) = 25, f(10) = 0. So the 5 × 5 square has the largest area and the degenerate rectangles have the smallest (zero) area.
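(A complementary sketch: brute force over a fine grid agrees with the calculus answer.)

    best = max((x / 1000 for x in range(10001)), key=lambda x: x * (10 - x))
    print(best, best * (10 - best))   # 5.0 25.0, the square found by calculus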

Here is a non-elementary example.

Proposition 28.3.5 (ex 1 + x)
For all real numbers x we have ex 1 + x.

Proof. Define the differentiable function

f(x) = e^x − (x + 1)    f : ℝ → ℝ.

Consider the compact interval Z = [−1, 100]. If x ≤ −1 then obviously f(x) > 0. Similarly, if x ≥ 100 then obviously f(x) > 0 too. So we just want to prove that if x ∈ Z, we have f(x) ≥ 0.

Indeed, there exists some global minimum p. It could be one of the endpoints −1 or 100. Otherwise, if it lies in U = (−1, 100) then it would have to satisfy

0 = f′(p) = e^p − 1 ⟹ p = 0.

As f(−1) > 0, f(100) > 0, f(0) = 0, we conclude p = 0 is the global minimum on Z; and hence f(x) ≥ 0 for all x ∈ Z, hence for all x. □

Remark 28.3.6 — If you are willing to use limits at ±∞, you can rewrite proofs like the above in such a way that you don’t have to explicitly come up with endpoints like −1 or 100. We won’t do so here, but it’s nice food for thought.

28.4  Rolle and friends

Prototypical example for this section: The racetrack principle, perhaps?

One corollary of the work in the previous section is Rolle’s theorem.

Theorem 28.4.1 (Rolle’s theorem)
Suppose f : [a,b] → ℝ is a continuous function, which is differentiable on the open interval (a,b), such that f(a) = f(b). Then there is a point c ∈ (a,b) such that f′(c) = 0.

Proof. Assume f is nonconstant (otherwise any c works). By compactness, there exists both a global maximum and minimum. As f(a) = f(b), either the global maximum or the global minimum must lie inside the open interval (a,b), and then Fermat’s theorem on stationary points finishes. □

I was going to draw a picture until I realized xkcd #2042 has one already.

[Image from [?]: xkcd #2042.]

One can adapt the theorem as follows.

Theorem 28.4.2 (Mean value theorem)
Suppose f : [a,b] → ℝ is a continuous function, which is differentiable on the open interval (a,b). Then there is a point c ∈ (a,b) such that

f′(c) = (f(b) − f(a)) / (b − a).

Pictorially, there is a c such that the tangent at c has the same slope as the secant joining (a, f(a)) to (b, f(b)); and Rolle’s theorem is the special case where that secant is horizontal.

Proof of mean value theorem. Let s = (f(b) − f(a)) / (b − a) be the slope of the secant line, and define

g(x) = f(x) − sx

which intuitively shears f downwards so that the secant becomes horizontal. In fact g(a) = g(b) now, so we apply Rolle’s theorem to g. □

Remark 28.4.3 (For people with driver’s licenses) There is a nice real-life interpretation of this I should mention. A car is travelling along a one-dimensional road (with f(t) denoting the position at time t). Suppose you cover 900 kilometers in your car over the course of 5 hours (say f(0) = 0, f(5) = 900). Then there is some point in time at which your speed at that moment was exactly 180 kilometers per hour, and so you cannot really complain when the cops pull you over for speeding.

The mean value theorem is important because it lets you use derivative information to get information about the function in a way that is really not possible without it. Here is one quick application to illustrate my point:

Proposition 28.4.4 (Racetrack principle)
Let f, g : ℝ → ℝ be two differentiable functions with f(0) = g(0).

(a)
If f′(x) ≥ g′(x) for every x > 0, then f(x) ≥ g(x) for every x > 0.
(b)
If f′(x) > g′(x) for every x > 0, then f(x) > g(x) for every x > 0.

This proposition might seem obvious. You can think of it as a race track for a reason: if f and g denote the positions of two cars (or horses, etc.) and the first car is always faster than the second car, then the first car should end up ahead of the second car. As a special case, taking g = 0, this says that if f′(x) ≥ 0, i.e. “f is increasing”, then, well, f(x) ≥ f(0) for x > 0, which had better be true. However, if you try to prove this directly from the definition of the derivative, you will find that it is not easy! With the mean value theorem, it is almost immediate.

Proof of racetrack principle. We prove (a). Let h = f − g, so h(0) = 0. Assume for contradiction that h(p) < 0 for some p > 0. Then the secant joining (0, h(0)) to (p, h(p)) has negative slope; in other words, by the mean value theorem there is a 0 < c < p such that

f′(c) − g′(c) = h′(c) = (h(p) − h(0)) / p = h(p) / p < 0

so f′(c) < g′(c), contradiction. Part (b) is the same. □

Sometimes you will be faced with two functions which you cannot easily decouple; the following form may be more useful in that case.

Theorem 28.4.5 (Ratio mean value theorem)
Let f, g : [a,b] → ℝ be two continuous functions which are differentiable on (a,b), and such that g(a) ≠ g(b). Then there is a c ∈ (a,b) such that g′(c) ≠ 0 and

f′(c) / g′(c) = (f(b) − f(a)) / (g(b) − g(a)).

Proof. Use Rolle’s theorem on the function

h(x) = [f(x) − f(a)][g(b) − g(a)] − [g(x) − g(a)][f(b) − f(a)]. □

Remark 28.4.6 — You can capture the case g(a) = g(b) as well if you are willing to write the conclusion in the less intuitive form g′(c)[f(b) − f(a)] = f′(c)[g(b) − g(a)]. In the event g(a) = g(b), this is just the mean value theorem for g, and the data of f is irrelevant.

28.5  Smooth functions

Prototypical example for this section: All the functions you’re used to.

Let f : U → ℝ be differentiable, thus giving us a function f′ : U → ℝ. If our initial function was nice enough, then we can take the derivative again, giving a function f′′ : U → ℝ, and so on. In general, after taking the derivative n times, we denote the resulting function by f^(n). By convention, f^(0) = f.

Definition 28.5.1. A function f : U → ℝ is smooth if it is infinitely differentiable; that is, the function f^(n) exists for every n.

Question 28.5.2. Show that the absolute value function is not smooth.

Most of the functions we encounter, such as polynomials, e^x, log, sin, cos, are smooth, and so are their compositions. Here is a weird example, which we will say more about shortly.

Example 28.5.3 (A smooth function with all derivatives zero)
Consider the function

f(x) = e^{−1/x} for x > 0,   f(x) = 0 for x ≤ 0.

This function can be shown to be smooth, with f^(n)(0) = 0 for every n. So this function has every derivative at the origin equal to zero, despite being nonconstant!

28.6  A few harder problems to think about

Problem 28A (Quotient rule). Let f : (a,b) → ℝ and g : (a,b) → ℝ_{>0} be differentiable functions. Let h = f/g be their quotient (also a function (a,b) → ℝ). Show that the derivative of h is given by

h′(x) = (f′(x)g(x) − f(x)g′(x)) / g(x)².

Problem 28B. For real numbers x > 0, how small can x^x be?

Problem 28C (RMM 2018). Determine whether or not there exist nonconstant polynomials P(x) and Q(x) with real coefficients satisfying

P(x)^{10} + P(x)^9 = Q(x)^{21} + Q(x)^{20}.

Problem 28D. Let P(x) be a degree n polynomial with real coefficients. Prove that the equation e^x = P(x) has at most n + 1 real solutions in x.

Problem 28E (Jensen’s inequality). Let f : (a,b) → ℝ be a twice differentiable function such that f′′(x) ≥ 0 for all x (i.e. f is convex). Prove that

f((x + y)/2) ≤ (f(x) + f(y))/2

for all real numbers x and y in the interval (a,b).

Problem 28F (L’Hôpital rule, or at least one case). Let f, g : ℝ → ℝ be differentiable functions and let p be a real number. Suppose that

lim_{x→p} f(x) = lim_{x→p} g(x) = 0.

Prove that

lim_{x→p} f(x)/g(x) = lim_{x→p} f′(x)/g′(x)

provided the right-hand limit exists.

29  Power series and Taylor series

Polynomials are very well-behaved functions, and are studied extensively for that reason. From an analytic perspective, for example, they are smooth, and their derivatives are easy to compute.

In this chapter we will study power series, which are literally “infinite polynomials” ∑_n a_n x^n. Armed with our understanding of series and differentiation, we will see three great things: power series can be differentiated term by term; they give a rigorous definition of e^x, log, and exponentiation; and the whole theory works equally well over ℂ.

29.1  Motivation

To get the ball rolling, let’s start with one infinite polynomial you’ll recognize: for any fixed number −1 < x < 1 we have the series convergence

1/(1 − x) = 1 + x + x² + ⋯

by the geometric series formula.

Let’s pretend we didn’t see this already in ??. So, we instead have a smooth function f : (−1, 1) → ℝ by

f(x) = 1/(1 − x).

Suppose we wanted to pretend that it was equal to an “infinite polynomial” near the origin, that is

(1 − x)^{−1} = a_0 + a_1 x + a_2 x² + a_3 x³ + a_4 x⁴ + ⋯.

How could we find that polynomial, if we didn’t already know?

Well, for starters, we can note that by plugging in x = 0 we obviously want a_0 = 1.

We have derivatives, so actually, we can differentiate both sides to obtain

(1 − x)^{−2} = a_1 + 2a_2 x + 3a_3 x² + 4a_4 x³ + ⋯.

If we now set x = 0, we get a_1 = 1. In fact, let’s keep taking derivatives and see what we get.

(1 − x)^{−1} = a_0 + a_1 x + a_2 x² + a_3 x³ + a_4 x⁴ + a_5 x⁵ + ⋯
(1 − x)^{−2} = a_1 + 2a_2 x + 3a_3 x² + 4a_4 x³ + 5a_5 x⁴ + ⋯
2(1 − x)^{−3} = 2a_2 + 6a_3 x + 12a_4 x² + 20a_5 x³ + ⋯
6(1 − x)^{−4} = 6a_3 + 24a_4 x + 60a_5 x² + ⋯
24(1 − x)^{−5} = 24a_4 + 120a_5 x + ⋯
⋮

If we set x = 0 we find 1 = a_0 = a_1 = a_2 = ⋯, which is what we expect; the geometric series 1/(1 − x) = 1 + x + x² + ⋯. And so actually taking derivatives was enough to get the right claim!
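If you want to watch this procedure happen mechanically, here is a small Python sketch (using the third-party sympy library) that computes a_n = f^(n)(0)/n! for f(x) = 1/(1 − x):

    import sympy as sp

    x = sp.symbols('x')
    f = 1 / (1 - x)

    # n-th coefficient: differentiate n times, plug in x = 0, divide by n!
    coeffs = [sp.diff(f, x, n).subs(x, 0) / sp.factorial(n) for n in range(6)]
    print(coeffs)  # [1, 1, 1, 1, 1, 1], matching 1/(1-x) = 1 + x + x^2 + ...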

29.2  Power series

Prototypical example for this section: 1/(1 − z) = 1 + z + z² + ⋯, which converges on (−1, 1).

Of course this is not rigorous, since we haven’t described what the right-hand side is, much less shown that it can be differentiated term by term. So we define the main character now.

Definition 29.2.1. A power series is a sum of the form

∑_{n=0}^{∞} a_n z^n = a_0 + a_1 z + a_2 z² + ⋯

where a_0, a_1, … are real numbers, and z is a variable.

Abuse of Notation 29.2.2 (0⁰ = 1). If you are very careful, you might notice that when z = 0 and n = 0, we find 0⁰ terms appearing. For this chapter the convention is that they are all equal to one.

Now, if I plug in a particular real number h, then I get a series of real numbers ∑_{n=0}^{∞} a_n h^n. So I can ask, when does this series converge? It turns out there is a precise answer for this.

Definition 29.2.3. Given a power series ∑_{n=0}^{∞} a_n z^n, the radius of convergence R is defined by the formula

1/R = limsup_{n→∞} |a_n|^{1/n}

with the convention that R = 0 if the right-hand side is ∞, and R = ∞ if the right-hand side is zero.
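To get a rough numeric feel for the formula, here is a crude Python sketch (not a proof: it evaluates |a_n|^{1/n} at a single large n rather than taking a true lim sup):

    import math

    def radius_estimate(log_abs_a, n=400):
        # 1/R = limsup |a_n|^(1/n), so R ~ exp(-log|a_n| / n) for large n
        return math.exp(-log_abs_a(n) / n)

    print(radius_estimate(lambda n: 0.0))                  # a_n = 1:   about 1
    print(radius_estimate(lambda n: -math.lgamma(n + 1)))  # a_n = 1/n!: large,
                                                           # growing without bound in n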

Theorem 29.2.4 (Cauchy-Hadamard theorem)
Let ∑_{n=0}^{∞} a_n z^n be a power series with radius of convergence R. Let h be a real number, and consider the infinite series

∑_{n=0}^{∞} a_n h^n

of real numbers. Then:

(a)
If |h| < R, the series converges absolutely.
(b)
If |h| > R, the series diverges.

Proof. This is not actually hard, but it won’t be essential, so not included. □

Remark 29.2.5 — In the case |h| = R, it could go either way.

Example 29.2.6 (∑ z^n has radius 1)
Consider the geometric series ∑_n z^n = 1 + z + z² + ⋯. Since a_n = 1 for every n, we get R = 1, which is what we expected.

Therefore, if ∑_n a_n z^n is a power series with a nonzero radius of convergence R > 0, then it can also be thought of as a function

(−R, R) → ℝ   by   h ↦ ∑_{n≥0} a_n h^n.

This is great. Note also that if R = ∞, this means we get a function ℝ → ℝ.

Abuse of Notation 29.2.7 (Power series vs. functions). There is some subtlety going on with “types” of objects again. Analogies with polynomials can help.

Consider P(x) = x³ + 7x + 9, a polynomial. You can, for any real number h, plug it in to get a real number P(h). However, in the polynomial itself, the symbol x is supposed to be a variable — which sometimes we will plug in a real number for, but that happens only after the polynomial is defined.

Despite this, “the polynomial P(x) = x³ + 7x + 9” (which can be thought of as the coefficients) and “the real-valued function x ↦ x³ + 7x + 9” are often used interchangeably. The same is about to happen with power series: while they were initially thought of as a sequence of coefficients, the Cauchy-Hadamard theorem lets us think of them as functions too, and thus we blur the distinction between them.

29.3  Differentiating them

Prototypical example for this section: We saw earlier that 1 + x + x² + x³ + ⋯ has derivative 1 + 2x + 3x² + ⋯.

As promised, differentiation works exactly as you want.

Theorem 29.3.1 (Differentiation works term by term)
Let ∑_{n≥0} a_n z^n be a power series with radius of convergence R > 0, and consider the corresponding function

f : (−R, R) → ℝ   by   f(x) = ∑_{n≥0} a_n x^n.

Then all the derivatives of f exist and are given by the power series

f′(x) = ∑_{n≥1} n a_n x^{n−1}
f′′(x) = ∑_{n≥2} n(n − 1) a_n x^{n−2}
⋮

which also converge for any x ∈ (−R, R). In particular, f is smooth.

Proof. Also omitted. The right way to prove it is to define the notion of “converges uniformly”, and strengthen Cauchy-Hadamard to have this as a conclusion as well. However, we won’t use this later. □

Corollary 29.3.2 (A description of power series coefficients)
Let ∑_{n≥0} a_n z^n be a power series with radius of convergence R > 0, and consider the corresponding function f(x) as above. Then

a_n = f^(n)(0) / n!.

Proof. Take the nth derivative and plug in x = 0. □

29.4  Analytic functions

Prototypical example for this section: The piecewise e^{−1/x}-or-0 function is smooth, but not analytic.

With all these nice results about power series, we now have a way to do this process the other way: suppose that f : U → ℝ is a function. Can we express it as a power series?

Functions for which this is true are called analytic.

Definition 29.4.1. A function f : U → ℝ is analytic at the point p ∈ U if there exists an open neighborhood V of p (inside U) and a power series ∑_n a_n z^n such that

f(x) = ∑_{n≥0} a_n (x − p)^n

for any x ∈ V. As usual, the whole function is analytic if it is analytic at each point.

Question 29.4.2. Show that if f is analytic, then it’s smooth.

Moreover, if f is analytic, then by the corollary above its coefficients are actually described exactly by

f(x) = ∑_{n≥0} (f^(n)(p) / n!) (x − p)^n.

Even if f is smooth but not analytic, we can at least write down the power series; we give this a name.

Definition 29.4.3. For smooth f, the power series ∑_{n≥0} (f^(n)(p) / n!) z^n is called the Taylor series of f at p.

Example 29.4.4 (Examples of analytic functions)

(a)
Polynomials, sin, cos, e^x, log all turn out to be analytic.
(b)
The smooth function from before, defined by

f(x) = exp(−1/x) for x > 0,   f(x) = 0 for x ≤ 0,

is not analytic. Indeed, suppose for contradiction it was. As all the derivatives at the origin are zero, its Taylor series would be 0 + 0x + 0x² + ⋯. This Taylor series does converge, but not to the right value, as f(ε) > 0 for any ε > 0: contradiction.

Theorem 29.4.5 (Analytic iff Taylor series has positive radius)
Let f : U → ℝ be a smooth function. Then f is analytic if and only if for any point p ∈ U, its Taylor series at p has positive radius of convergence.

Example 29.4.6
It now follows that f(x) = sin(x) is analytic. To see that, we can compute

f(0) = sin 0 = 0
f′(0) = cos 0 = 1
f′′(0) = −sin 0 = 0
f^(3)(0) = −cos 0 = −1
f^(4)(0) = sin 0 = 0
f^(5)(0) = cos 0 = 1
f^(6)(0) = −sin 0 = 0
⋮

and so by continuing the pattern (which repeats every four), we find the Taylor series is

z − z³/3! + z⁵/5! − z⁷/7! + ⋯

which is seen to have radius of convergence R = ∞.
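If you’d like to double-check the pattern, the third-party sympy library will happily expand the series for you (a quick Python sketch):

    import sympy as sp

    z = sp.symbols('z')
    # prints z - z**3/6 + z**5/120 - z**7/5040 + O(z**8),
    # matching z - z^3/3! + z^5/5! - z^7/7! + ...
    print(sp.series(sp.sin(z), z, 0, 8))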

Like with differentiable functions:

Proposition 29.4.7 (All your usual closure properties for analytic functions)
The sums, products, compositions, nonzero quotients of analytic functions are analytic.

The upshot of this is that most of the usual functions that occur in nature, or even artificial ones like f(x) = e^x + x sin(x²), will be analytic, hence describable locally by Taylor series.

29.5  A definition of Euler’s constant and exponentiation

We can actually give a definition of e^x using the tools we have now.

Definition 29.5.1. We define the map exp : ℝ → ℝ by using the following power series, which has infinite radius of convergence:

exp(x) = ∑_{n≥0} x^n / n!.

We then define Euler’s constant as e = exp(1).

Question 29.5.2. Show that under this definition, exp′ = exp.

We are then settled with:

Proposition 29.5.3 (exp is multiplicative)
Under this definition,

exp(x + y) = exp (x )exp(y).

Idea of proof. There is some subtlety here with switching the order of summation that we won’t address. Modulo that:

exp(x) exp(y) = (∑_{n≥0} x^n/n!) · (∑_{m≥0} y^m/m!)
 = ∑_{n≥0} ∑_{m≥0} x^n y^m / (n! m!)
 = ∑_{k≥0} ∑_{m+n=k, m,n≥0} x^n y^m / (n! m!)
 = ∑_{k≥0} ∑_{m+n=k, m,n≥0} (k choose n) x^n y^m / k!
 = ∑_{k≥0} (x + y)^k / k!
 = exp(x + y). □
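Here is a quick numeric illustration of the proposition, using partial sums of the defining series (a throwaway Python sketch; 30 terms is plenty for these inputs):

    from math import factorial

    def exp_partial(x, terms=30):
        # partial sum of the power series defining exp
        return sum(x**n / factorial(n) for n in range(terms))

    x, y = 0.7, -1.3
    print(exp_partial(x) * exp_partial(y))  # both lines print ~0.548811...,
    print(exp_partial(x + y))               # illustrating exp(x)exp(y) = exp(x+y)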

Corollary 29.5.4 (exp is positive)

(a)
We have exp(x) > 0 for any real number x.
(b)
The function exp is strictly increasing.

Proof. First,

exp(x) = exp(x/2)² ≥ 0

which shows exp is nonnegative. Also, 1 = exp(0) = exp(x) exp(−x) implies exp(x) ≠ 0 for any x, proving (a).

(b) then follows, since exp′ = exp is strictly positive (racetrack principle). □

The log function then comes after.

Definition 29.5.5. We may define log : ℝ_{>0} → ℝ to be the inverse function of exp.

Since its derivative is 1/x, it is smooth; and then one may compute its coefficients to show it is analytic.

Note that this actually gives us a rigorous way to define a^r for any a > 0 and r > 0, namely

a^r := exp(r log a).

29.6  This all works over complex numbers as well, except also complex analysis is heaven

We now mention that every theorem we referred to above holds equally well if we work over ℂ, with essentially no modifications.

In particular, we can now even define complex exponentials, giving us a function

exp : ℂ → ℂ

since the power series still has R = ∞. More generally, if a > 0 and z ∈ ℂ we may still define

a^z := exp(z log a).

(We still require the base a to be a positive real so that log a is defined, though. So the i^i issue is still there.)

However, if one tries to study calculus for complex functions as we did for the real case, in addition to most results carrying over, we run into a huge surprise:

If f : ℂ → ℂ is differentiable, it is analytic.

And this is just the beginning of the nearly unbelievable results that hold for complex analytic functions. But this is the part on real analysis, so you will have to read about this later!

29.7  A few harder problems to think about

Problem 29A. Find the Taylor series of log(1 − x).

Problem 29B (Euler formula). Show that

exp(iθ) = cos θ + i sin θ

for any real number θ.

Problem 29C (Taylor’s theorem, Lagrange form). Let f : [a,b] → ℝ be continuous and n + 1 times differentiable on (a,b). Define

P_n = ∑_{k=0}^{n} (f^(k)(a) / k!) · (b − a)^k.

Prove that there exists ξ ∈ (a,b) such that

f^(n+1)(ξ) = (n + 1)! · (f(b) − P_n) / (b − a)^{n+1}.

This generalizes the mean value theorem (which is the special case n = 0, where P_0 = f(a)).

Problem 29D (Putnam 2018 A5). Let f : ℝ → ℝ be smooth, and assume that f(0) = 0, f(1) = 1, and f(x) ≥ 0 for every real number x. Prove that f^(n)(x) < 0 for some positive integer n and real number x.

30  Riemann integrals

“Trying to Riemann integrate discontinuous functions is kind of outdated.”
— Dennis Gaitsgory, [?]

We will go ahead and define the Riemann integral, but we won’t do very much with it. The reason is that the Lebesgue integral is basically better, so we will define it, check the fundamental theorem of calculus (or rather, leave it as a problem at the end of the chapter), and then always use Lebesgue integrals forever after.

30.1  Uniform continuity

Prototypical example for this section: f(x) = x² is not uniformly continuous on ℝ, but continuous functions on compact sets are always uniformly continuous.

Definition 30.1.1. Let f : M → N be a continuous map between two metric spaces. We say that f is uniformly continuous if for all ε > 0 there exists a δ > 0 such that

d_M(p, q) < δ  ⟹  d_N(f(p), f(q)) < ε.

The difference is that, given an ε > 0, we must specify a δ > 0 which works for every choice of inputs p and q; whereas usually δ is allowed to depend on p and q. (Also, this definition can’t be ported to a general topological space.)

Example 30.1.2 (Uniform continuity failure)

(a)
The function f : ℝ → ℝ by x ↦ x² is not uniformly continuous. Suppose we take ε = 0.1 for example. There is no δ such that |x − y| < δ implies |x² − y²| < 0.1, since as x and y get large, the function f becomes increasingly sensitive to small changes.
(b)
The function (0,1) → ℝ by x ↦ 1/x is not uniformly continuous.
(c)
The function ℝ_{≥0} → ℝ by x ↦ √x does turn out to be uniformly continuous (despite having unbounded derivative near zero!). Indeed, you can check that the assertion

|x − y| < ε²  ⟹  |√x − √y| < ε

holds for any x, y, ε > 0.

The good news is that in the compact case all is well.

Theorem 30.1.3 (Uniform continuity free for compact spaces)
Let M be a compact metric space. Then any continuous map f : M → N is also uniformly continuous.

Proof. Assume for contradiction there is some bad ε > 0. Then taking δ = 1/n, we find that for each integer n there exist points p_n and q_n which are within 1/n of each other, but are mapped more than ε away from each other by f. In symbols, d_M(p_n, q_n) < 1/n but d_N(f(p_n), f(q_n)) ≥ ε.

By compactness of M, we can find a convergent subsequence p_{i_1}, p_{i_2}, … converging to some x ∈ M. Since q_{i_n} is within 1/i_n of p_{i_n}, it ought to converge as well, to the same point x ∈ M. Then the sequences f(p_{i_n}) and f(q_{i_n}) should both converge to f(x) ∈ N, but this is impossible as they are always at least ε away from each other. □

This means for example that x², viewed as a continuous function [0,1] → ℝ, is automatically uniformly continuous. Man, isn’t compactness great?

30.2  Dense sets and extension

Prototypical example for this section: Functions ℚ → N extend to ℝ → N if they’re uniformly continuous and N is complete. See also counterexamples below.

Definition 30.2.1. Let S be a subset (or subspace) of a topological space X. Then we say that S is dense if every nonempty open subset of X contains a point of S.

Example 30.2.2 (Dense sets)

(a)
ℚ is dense in ℝ.
(b)
In general, any metric space M is dense in its completion M̄.

Dense sets lend themselves to having functions extended. The idea is that if I have a continuous function f : ℚ → N, for some metric space N, then there should be at most one way to extend it to a function f̄ : ℝ → N. For we can approximate each real number by rational numbers: if I know f(1), f(1.4), f(1.41), …, then f̄(√2) had better be the limit of this sequence. So it is certainly unique.

However, there are two ways this could go wrong:

Example 30.2.3 (Non-existence of extension)

(a)
It could be that N is not complete, so the limit may not even exist in N. For example if N = ℚ, then certainly there is no way to extend even the identity function f : ℚ → ℚ to a function f̄ : ℝ → ℚ.
(b)
Even if N is complete, we might run into issues where f explodes. For example, let N = ℝ and define

f(x) = 1 / (x − √2),   f : ℚ → ℝ.

There is also no way to extend this, due to the explosion of f near √2 ∉ ℚ, which would cause f̄(√2) to be undefined.

However, the way to fix this is to require f to be uniformly continuous, and in that case we do get a unique extension.

Theorem 30.2.4 (Extending uniformly continuous functions)
Let M be a metric space, N a complete metric space, and S a dense subspace of M. Suppose ψ : S → N is a uniformly continuous function. Then there exists a unique continuous function ψ̄ : M → N such that the diagram

[diagram: S ↪ M, with ψ : S → N and ψ̄ : M → N]

commutes.

Outline of proof. As mentioned in the discussion, each x ∈ M can be approximated by a sequence x₁, x₂, … in S with x_i → x. The two main hypotheses, completeness and uniform continuity, are now used:

Exercise 30.2.5. Prove that ψ(x₁), ψ(x₂), … converges in N, by using uniform continuity to show that it is Cauchy, and then appealing to completeness of N.

Hence we define ψ̄(x) to be the limit of that sequence; this doesn’t depend on the choice of sequence, and one can use sequential continuity to show ψ̄ is continuous. □

30.3  Defining the Riemann integral

Extensions will allow us to define the Riemann integral. I need to introduce a bit of notation so bear with me.

Definition 30.3.1. Let [a,b] be a closed interval. We introduce notation for three families of functions on it:

(i)
C⁰([a,b]), the set of continuous functions [a,b] → ℝ;
(ii)
R([a,b]), the set of “rectangle functions”: functions constant on each piece of some partition a = t₀ < t₁ < ⋯ < tₙ < b; and
(iii)
M([a,b]) = C⁰([a,b]) ∪ R([a,b]), their union.

Warning: only C⁰([a,b]) is common notation, and the other two are made up.

See the picture below for a typical rectangle function. (It is irritating that we have to officially assign a single value to each t_i, even though there are naturally two values we want to use, and so we use the convention of letting the left endpoint be closed.)

Definition 30.3.2. We can impose a metric on M([a,b]) by defining

d(f, g) = sup_{x ∈ [a,b]} |f(x) − g(x)|.

Now, there is a natural notion of integral for rectangle functions: just sum up the obvious rectangles! Officially, this is the expression

f(a)(t₁ − a) + f(t₁)(t₂ − t₁) + f(t₂)(t₃ − t₂) + ⋯ + f(tₙ)(b − tₙ).

We denote this function by

Σ : R ([a,b]) → ℝ.

Theorem 30.3.3 (The Riemann integral)
There exists a unique continuous map

∫_a^b : M([a,b]) → ℝ

such that the diagram

[diagram: R([a,b]) ↪ M([a,b]), with Σ and ∫_a^b mapping to ℝ]

commutes.

Proof. We want to apply the extension theorem, so we just have to check a few things:

(i)
ℝ is complete.
(ii)
R([a,b]) is dense in M([a,b]): every continuous function on the compact interval [a,b] is uniformly continuous, hence a uniform limit of rectangle functions.
(iii)
Σ is uniformly continuous, since |Σ(f) − Σ(g)| ≤ (b − a) · d(f, g).

So the extension theorem applies, giving the unique continuous extension ∫_a^b of Σ. □

30.4  Meshes

The above definition might seem fantastical, overcomplicated, hilarious, or terrible, depending on your taste. But if you unravel it, it’s really the picture you are used to. What we have done is take every continuous function f : [a,b] → ℝ and show that it can be approximated by rectangle functions (which we phrased as a dense inclusion). Then we added up the areas of the rectangles. Nonetheless, we will give a definition that’s more like what you’re used to seeing in other places.

Definition 30.4.1. A tagged partition P of [a,b] consists of a partition of [a,b] into n intervals, with a point ξ_i in the i-th interval, denoted

a = t₀ < t₁ < t₂ < ⋯ < tₙ = b   and   ξ_i ∈ [t_{i−1}, t_i]  for all 1 ≤ i ≤ n.

The mesh of P is the width of the longest interval, i.e. max_i (t_i − t_{i−1}).

Of course, the point of this definition is that we will again add up areas of rectangles, with the ξ_i as the sample points.

Theorem 30.4.2 (Riemann integral)
Let f : [a,b] → ℝ be continuous. Then

∫_a^b f(x) dx = lim_{mesh P → 0} ( ∑_{i=1}^{n} f(ξ_i)(t_i − t_{i−1}) )

where the limit is taken over tagged partitions P. Here the limit means that we can take any sequence of tagged partitions whose mesh approaches zero.

Proof. The right-hand side corresponds to the areas of some rectangle functions g₁, g₂, … with increasingly narrow rectangles. As in the proof of ??, as the meshes of those partitions approach zero, by uniform continuity we have d(f, g_n) → 0 as well. Thus by continuity in the diagram of ??, we get lim_n Σ(g_n) = ∫_a^b f as needed. □

Combined with the mean value theorem, this can be used to give a short proof of the fundamental theorem of calculus for functions f with a continuous derivative. The idea is that for any choice of partition a = t₀ < t₁ < t₂ < ⋯ < tₙ = b, the mean value theorem makes it possible to pick ξ_i in each interval so that f′(ξ_i) matches the slope of the secant over that interval: at which point the areas sum to the total change in f. We illustrate this situation with three points, and invite the reader to fill in the details as ??.

One quick note is that although I’ve only defined the Riemann integral for continuous functions, there ought to be other functions for which it exists (including “piecewise continuous functions” for example, or functions “continuous almost everywhere”). The relevant definition is:

Definition 30.4.3. If f : [a,b] → ℝ is a function which is not necessarily continuous, but for which the limit

lim_{mesh P → 0} ( ∑_{i=1}^{n} f(ξ_i)(t_i − t_{i−1}) )

(over tagged partitions P) exists anyways, then we say f is Riemann integrable on [a,b], and define its value to be that limit ∫_a^b f(x) dx.

We won’t really use this definition much, because we will see that every Riemann integrable function is Lebesgue integrable, and the Lebesgue integral is better.

Example 30.4.4 (Your AP calculus returns)
We had better mention that ?? implies that we can compute Riemann integrals in practice, although most of you may already know this from high-school calculus. For example, on the interval [1,4], the derivative of the function F(x) = x³/3 is F′(x) = x². As f(x) = x² is a continuous function f : [1,4] → ℝ, we get

∫_1^4 x² dx = F(4) − F(1) = 64/3 − 1/3 = 21.

Note that we could also have picked F(x) = x³/3 + 2019; the function F is unique up to shifting, and this constant cancels out when we subtract. This is why it’s common in high school to (really) abuse notation and write ∫ x² dx = x³/3 + C.
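To watch the mesh limit converge numerically, here is a tiny Python sketch using tagged partitions into n equal pieces, sampling at left endpoints (my choice; any tags would do):

    def riemann_sum(f, a, b, n):
        # tagged partition into n equal intervals, xi_i = left endpoint
        h = (b - a) / n
        return sum(f(a + i * h) * h for i in range(n))

    for n in [10, 100, 1000, 10000]:
        print(n, riemann_sum(lambda x: x * x, 1, 4, n))  # approaches 21 as mesh -> 0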

30.5  A few harder problems to think about

Problem 30A. Let f : (a,b) → ℝ be differentiable and assume f′ is bounded. Show that f is uniformly continuous.

Problem 30B (Fundamental theorem of calculus). Let f : [a,b] → ℝ be continuous, differentiable on (a,b), and assume the derivative f′ extends to a continuous function f′ : [a,b] → ℝ. Prove that

∫_a^b f′(x) dx = f(b) − f(a).

Problem 30C (Improper integrals). For each real number r > 0, evaluate the limit

lim_{ε→0⁺} ∫_ε^1 x^{−r} dx

or show it does not exist.

This can intuitively be thought of as the “improper” integral ∫_0^1 x^{−r} dx; it doesn’t make sense in our original definition, since we did not (and cannot) define the integral over the non-compact (0,1], but we can still consider the integral over [ε,1] for any ε > 0.

Problem 30D. Show that

lim_{n→∞} ( 1/(n+1) + 1/(n+2) + ⋯ + 1/(2n) ) = log 2.

Part IX
Complex Analysis

31  Holomorphic functions

Throughout this chapter, we denote by U an open subset of the complex plane, and by Ω an open subset which is also simply connected. The main references for this chapter were [??].

31.1  The nicest functions on earth

In high school you were told how to differentiate and integrate real-valued functions. In this chapter on complex analysis, we’ll extend it to differentiation and integration of complex-valued functions.

Big deal, you say. Calculus was boring enough. Why do I care about complex calculus?

Perhaps it’s easiest to motivate things if I compare real analysis to complex analysis. In real analysis, your input lives inside the real line . This line is not terribly discerning – you can construct a lot of unfortunate functions. Here are some examples.

Example 31.1.1 (Optional: evil real functions)
You can skim over these very quickly: they’re only here to make a point.

(a)
The Devil’s Staircase (or Cantor function) is a continuous function H : [0,1] → [0,1] which has derivative zero “almost everywhere”, yet H(0) = 0 and H(1) = 1.
(b)
The Weierstraß function

x ↦ ∑_{n=0}^{∞} (1/2)^n cos(2015^n πx)

is continuous everywhere but differentiable nowhere.
(c)
The function

x ↦ x^100 for x ≥ 0,   −x^100 for x < 0

has the first 99 derivatives but not the 100th one.
(d)
If a function has all derivatives (we call these smooth functions), then it has a Taylor series. But for real functions that Taylor series might still be wrong. The function

x ↦ e^{−1/x} for x > 0,   0 for x ≤ 0

has derivatives at every point. But if you expand the Taylor series at x = 0, you get 0 + 0x + 0x² + ⋯, which is wrong for any x > 0 (even x = 0.0001).

Figure 31.1: The Weierstraß function (image from [?]).

Let’s even put aside the pathology. If I tell you the value of a real smooth function on the interval [−1,1], that still doesn’t tell you anything about the function as a whole. It could be literally anything, because it’s somehow possible to “fuse together” smooth functions.

So what about complex functions? If you consider them as functions ℝ² → ℝ², you now have the interesting property that you can integrate along things that are not line segments: you can write integrals across curves in the plane. But ℂ has something more: it is a field, so you can multiply and divide two complex numbers.

So we restrict our attention to differentiable functions, called holomorphic functions. It turns out that the multiplication on ℂ makes all the difference. The primary theme in what follows is that holomorphic functions are really, really nice, and that knowing tiny amounts of data about the function can determine all its values.

The two main highlights of this chapter, from which all other results are more or less corollaries, are the Cauchy-Goursat theorem (contour integrals of holomorphic functions over loops vanish) and Cauchy’s integral formula (a holomorphic function is determined inside a disk by its values on the boundary, and hence is analytic).

Some of the resulting corollaries appear as problems at the end of the chapter: Liouville’s theorem, the fact that zeros are isolated, the identity theorem, and the maximum modulus principle.

As [?] writes: “Complex analysis is the good twin and real analysis is the evil one: beautiful formulas and elegant theorems seem to blossom spontaneously in the complex domain, while toil and pathology rule the reals”.

31.2  Complex differentiation

Prototypical example for this section: Polynomials are holomorphic; z̄ is not.

Let f : U → ℂ be a complex function. Then for some z₀ ∈ U, we define the derivative at z₀ to be

lim_{h→0} (f(z₀ + h) − f(z₀)) / h.

Note that this limit may not exist; when it does we say f is differentiable at z0.

What do I mean by a “complex” limit h → 0? It’s what you might expect: for every ε > 0 there should be a δ > 0 such that

0 < |h| < δ  ⟹  |(f(z₀ + h) − f(z₀))/h − L| < ε.

If you like topology, you are encouraged to think of this in terms of open neighborhoods in the complex plane. (This is why we require U to be open: it makes it possible to take δ-neighborhoods in it.)

But note that having a complex derivative is actually much stronger than a real function having a derivative. In the real line, h can only approach zero from below and above, and for the limit to exist we need the “left limit” to equal the “right limit”. But the complex numbers form a plane: h can approach zero from many directions, and we need all the limits to be equal.

Example 31.2.1 (Important: conjugation is not holomorphic)
Let f(z) = z̄ be complex conjugation, f : ℂ → ℂ. This function, despite its simple nature, is not holomorphic! Indeed, at z = 0 we have

(f(h) − f(0)) / h = h̄ / h.

This does not have a limit as h → 0, because depending on “which direction” we approach zero from, we have different values.

If a function f : U → ℂ is complex differentiable at all the points in its domain, it is called holomorphic. In the special case of a holomorphic function with domain U = ℂ, we call the function entire.

Example 31.2.2 (Examples of holomorphic functions)
In all the examples below, the derivative of the function is the same as in their real analogues (e.g. the derivative of e^z is e^z).

(a)
Any polynomial z ↦ zⁿ + c_{n−1} z^{n−1} + ⋯ + c₀ is holomorphic.
(b)
The complex exponential exp : x + yi ↦ e^x (cos y + i sin y) can be shown to be holomorphic.
(c)
sin and cos are holomorphic when extended to the complex plane by cos z = (e^{iz} + e^{−iz})/2 and sin z = (e^{iz} − e^{−iz})/(2i).
(d)
As usual, the sum, product, chain rules and so on apply, and hence sums, products, nonzero quotients, and compositions of holomorphic functions are also holomorphic.

You are welcome to try and prove these results, but I won’t bother to do so.

31.3  Contour integrals

Prototypical example for this section: ∮_γ z^m dz around the unit circle.

In the real line we knew how to integrate a function across a line segment [a,b]: essentially, we’d “follow along” the line segment, adding up the values of f we see, to get some area. Unlike in the real line, in the complex plane we have the power to integrate over arbitrary paths: for example, we might compute an integral around a unit circle. A contour integral lets us formalize this.

First of all, if f : ℝ → ℂ and f(t) = u(t) + iv(t) for u, v : ℝ → ℝ, we can define an integral ∫_a^b by just adding the real and imaginary parts:

∫_a^b f(t) dt = (∫_a^b u(t) dt) + i (∫_a^b v(t) dt).

Now let α : [a,b] → ℂ be a path, thought of as a complex differentiable function. Such a path is called a contour, and we define its contour integral by

∮_α f(z) dz = ∫_a^b f(α(t)) · α′(t) dt.

You can almost think of this as a u-substitution (which is where the α′ comes from). In particular, it turns out this integral does not depend on how α is “parametrized”: a circle given by

[0, 2π] → ℂ : t ↦ e^{it}

and another circle given by

[0, 1] → ℂ : t ↦ e^{2πit}

and yet another circle given by

[0, 1] → ℂ : t ↦ e^{2πi t⁵}

will all give the same contour integral, because the paths they represent have the same geometric description: “run around the unit circle once”.

In what follows I try to use α for general contours and γ in the special case of loops.

Let’s see an example of a contour integral.

Theorem 31.3.1
Take γ : [0, 2π] → ℂ to be the unit circle specified by

t ↦ e^{it}.

Then for any integer m, we have

∮_γ z^m dz = 2πi if m = −1, and 0 otherwise.

Proof. The derivative of e^{it} is i e^{it}. So, by definition, the answer is the value of

∫_0^{2π} (e^{it})^m (i e^{it}) dt = ∫_0^{2π} i (e^{it})^{1+m} dt
 = i ∫_0^{2π} cos[(1 + m)t] + i sin[(1 + m)t] dt
 = −∫_0^{2π} sin[(1 + m)t] dt + i ∫_0^{2π} cos[(1 + m)t] dt.

This is now an elementary calculus question: one can see that this equals 2πi if m = −1, and otherwise the integrals vanish. □

Let me try to explain why this intuitively ought to be true for m = 0. In that case we have ∮_γ 1 dz. So as the integral walks around the unit circle, it “sums up” all the tangent vectors at every point (that’s the direction it’s walking in), multiplied by 1. And given the nice symmetry of the circle, it should come as no surprise that everything cancels out. The theorem says that even if we multiply by z^m for m ≠ −1, we get the same cancellation.
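Here is the same computation done numerically, by discretizing ∮_α f(z) dz = ∫_a^b f(α(t)) α′(t) dt (a quick Python sketch; the step count 2000 is an arbitrary choice):

    import cmath

    def contour_integral(f, alpha, alpha_prime, n=2000):
        # Riemann sum for the integral of f(alpha(t)) * alpha'(t) over [0, 2*pi]
        h = 2 * cmath.pi / n
        return sum(f(alpha(k * h)) * alpha_prime(k * h) * h for k in range(n))

    alpha = lambda t: cmath.exp(1j * t)          # the unit circle
    alpha_prime = lambda t: 1j * cmath.exp(1j * t)
    for m in (-2, -1, 0, 1):
        # prints approximately 0, 2*pi*i, 0, 0 respectively
        print(m, contour_integral(lambda z: z**m, alpha, alpha_prime))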

Definition 31.3.2. Given α : [0,1] → ℂ, we denote by ᾱ the “backwards” contour ᾱ(t) = α(1 − t).

Question 31.3.3. What’s the relation between ∮_α f dz and ∮_ᾱ f dz? Prove it.

This might seem a little boring. Things will get really cool really soon, I promise.

31.4  Cauchy-Goursat theorem

Prototypical example for this section: ∮_γ z^m dz = 0 for m ≥ 0. But if m < 0, Cauchy’s theorem does not apply.

Let Ω ⊆ ℂ be simply connected (for example, Ω = ℂ), and consider two paths α, β with the same start and end points.

What’s the relation between ∮_α f(z) dz and ∮_β f(z) dz? You might expect there to be some relation between them, considering that the space Ω is simply connected. But you probably wouldn’t expect there to be much of a relation.

As a concrete example, let Ψ : ℂ → ℂ be the function z ↦ z − Re[z] (for example, Ψ(2015 + 3i) = 3i). Let’s consider two paths from −1 to 1: the path β, which walks along the real axis, and the path α, which follows an upper semicircle.

Obviously ∮_β Ψ(z) dz = 0. But heaven knows what ∮_α Ψ(z) dz is supposed to equal. We can compute it now just out of non-laziness. If you like, you are welcome to compute it yourself (it’s a little annoying but not hard). If I myself didn’t mess up, it is

∮_α Ψ(z) dz = −∮_{ᾱ} Ψ(z) dz = −∫_0^π (i sin t) · i e^{it} dt = (1/2)πi

which in particular is not zero.

But somehow Ψ is not a really natural function. It’s not respecting any of the nice, multiplicative structure of ℂ, since it just rudely lops off the real part of its inputs. More precisely,

Question 31.4.1. Show that Ψ(z) = z − Re[z] is not holomorphic. (Hint: z̄ is not holomorphic.)

Now here’s a miracle: for holomorphic functions, the two integrals are always equal. Equivalently, (by considering α followed by β) contour integrals of loops are always zero. This is the celebrated Cauchy-Goursat theorem (also called the Cauchy integral theorem, but later we’ll have a “Cauchy Integral Formula” so blah).

Theorem 31.4.2 (Cauchy-Goursat theorem)
Let γ be a loop, and f : Ω → ℂ a holomorphic function, where Ω is open in ℂ and simply connected. Then

∮_γ f(z) dz = 0.

Remark 31.4.3 (Sanity check) This might look surprising considering that we saw ∮_γ z^{−1} dz = 2πi earlier. The subtlety is that z^{−1} is not even defined at z = 0. On the other hand, the function ℂ ∖ {0} → ℂ by z ↦ 1/z is holomorphic! The defect now is that Ω = ℂ ∖ {0} is not simply connected. So the theorem passes our sanity checks, albeit barely.

The typical proof of Cauchy’s theorem assumes additionally that the partial derivatives of f are continuous and then applies the so-called Green’s theorem. But it was Goursat who successfully proved the fully general theorem we’ve stated above, which assumes only that f is holomorphic. I’ll only outline the proof, and very briefly. You can show that if f : Ω → ℂ has an antiderivative F : Ω → ℂ which is also holomorphic, and moreover Ω is simply connected, then you get a “fundamental theorem of calculus”, à la

∮_α f(z) dz = F(α(b)) − F(α(a))

where α : [a,b] → ℂ is some path. So to prove Cauchy-Goursat, you only have to show this antiderivative F exists. Goursat works very hard to prove the result in the special case that γ is a triangle, and hence by induction for any polygon. Once he has the result for triangles, he uses this special case to construct the function F explicitly. Goursat then shows that F is holomorphic, completing the proof.

Anyways, the theorem implies that ∮_γ z^m dz = 0 when m ≥ 0. So much for all our hard work earlier. But so far we’ve only played with circles. This theorem holds for any contour which is a loop. So what else can we do?

31.5  Cauchy’s integral formula

We now present a stunning application of Cauchy-Goursat, a “representation theorem”: essentially, it says that values of f inside a disk are determined by just the values on the boundary! In fact, we even write down the exact formula. As [?] says, “any time a certain type of function satisfies some sort of representation theorem, it is likely that many more deep theorems will follow.” Let’s pull back the curtain:

Theorem 31.5.1 (Cauchy’s integral formula)
Let γ : [0, 2π] → ℂ be a circle in the plane given by t ↦ R e^{it}, which bounds a disk D. Suppose f : U → ℂ is holomorphic such that U contains the circle and its interior. Then for any point a in the interior of D, we have

f(a) = (1/2πi) ∮_γ f(z)/(z − a) dz.

Note that we don’t require U to be simply connected, but the reason is pretty silly: we’re only going to ever integrate f over D, which is an open disk, and hence the disk is simply connected anyways.

The presence of 2πi, which you saw earlier in the form ∮_circle z^{−1} dz = 2πi, is no accident. In fact, that’s the central result we’re going to use to prove the result.

Proof. There are several proofs out there, but I want to give the one that really draws out the power of Cauchy’s theorem. Here’s the picture we have: there’s a point a sitting inside a circle γ, and we want to get our hands on the value f(a).

We’re going to do a trick: construct a keyhole contour Γδ,𝜀 which has an outer circle γ, plus an inner circle γ𝜀, which is a circle centered at a with radius 𝜀, running clockwise (so that γ𝜀 runs counterclockwise). The “width” of the corridor is δ. See picture:

Hence Γδ,𝜀 consists of four smooth curves.

Question 31.5.2. Draw a simply connected open set Ω which contains the entire Γδ,𝜀 but does not contain the point a.

Hence, the function f(z)/(z − a) manages to be holomorphic on all of Ω. Thus Cauchy’s theorem applies and tells us that

0 = ∮_{Γ_{δ,ε}} f(z)/(z − a) dz.

As we let δ → 0, the two walls of the keyhole will cancel each other (because f is continuous, and the walls run in opposite directions). So taking the limit as δ → 0, we are left with just γ and γ_ε, which (taking again orientation into account) gives

∮_γ f(z)/(z − a) dz = −∮_{γ_ε} f(z)/(z − a) dz = ∮_{γ̄_ε} f(z)/(z − a) dz.

Thus we’ve managed to replace γ with a much smaller circle γ̄_ε centered around a, and the rest is algebra.

To compute the last quantity, write

∮_{γ̄_ε} f(z)/(z − a) dz = ∮_{γ̄_ε} (f(z) − f(a))/(z − a) dz + f(a) ∮_{γ̄_ε} 1/(z − a) dz
 = ∮_{γ̄_ε} (f(z) − f(a))/(z − a) dz + 2πi · f(a)

where we’ve used ??. Thus, all we have to do is show that

∮_{γ̄_ε} (f(z) − f(a))/(z − a) dz = 0.

For this we can basically use the weakest bound possible, the so-called ML lemma which I’ll cite without proof: it says “bound the function everywhere by its maximum”.

Lemma 31.5.3 (ML estimation lemma)
Let f be a holomorphic function and α a path. Suppose M = max_{z on α} |f(z)|, and let L be the length of α. Then

|∮_α f(z) dz| ≤ M L.

(This is straightforward to prove if you know the definition of length: L = ∫_a^b |α′(t)| dt, where α : [a,b] → ℂ.)

Anyways, as ε → 0, the quantity (f(z) − f(a))/(z − a) approaches f′(a), and so for small enough ε (i.e. z close to a) there’s some upper bound M. Yet the length of γ̄_ε is the circumference 2πε. So the ML lemma says that

|∮_{γ̄_ε} (f(z) − f(a))/(z − a) dz| ≤ 2πε · M → 0

as desired. □
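It’s fun to check the formula numerically as well (a quick Python sketch; the circle radius and sample point a are arbitrary choices, with a inside the circle):

    import cmath

    def cauchy_value(f, a, R=1.0, n=4000):
        # approximates (1/(2*pi*i)) * integral of f(z)/(z-a) over z = R e^{it}
        h = 2 * cmath.pi / n
        total = 0
        for k in range(n):
            z = R * cmath.exp(1j * k * h)
            total += f(z) / (z - a) * (1j * z) * h  # dz = i R e^{it} dt = i z dt
        return total / (2j * cmath.pi)

    a = 0.3 + 0.2j                     # a point inside the unit circle
    print(cauchy_value(cmath.exp, a))  # both lines agree: f(a) is
    print(cmath.exp(a))                # recovered from boundary values alone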

31.6  Holomorphic functions are analytic

Prototypical example for this section: Imagine a formal series ∑_k c_k x^k!

In the setup of the previous problem, we have a circle γ : [0, 2π] → ℂ (the one given by t ↦ R e^{it}) and a holomorphic function f : U → ℂ, where U contains the disk D. We can write

f(a) = (1/2πi) ∮_γ f(z)/(z − a) dz
 = (1/2πi) ∮_γ (f(z)/z) · 1/(1 − a/z) dz
 = (1/2πi) ∮_γ (f(z)/z) ∑_{k≥0} (a/z)^k dz.

You can prove (using the so-called Weierstrass M-test) that the summation order can be switched:

f(a) = (1/2πi) ∑_{k≥0} ∮_γ (f(z)/z) (a/z)^k dz
 = (1/2πi) ∑_{k≥0} ∮_γ (f(z)/z^{k+1}) a^k dz
 = ∑_{k≥0} ( (1/2πi) ∮_γ f(z)/z^{k+1} dz ) a^k.

Letting c_k = (1/2πi) ∮_γ f(z)/z^{k+1} dz, and noting this is independent of a, this is

f(a) = ∑_{k≥0} c_k a^k

and that’s the miracle: holomorphic functions are given by a Taylor series! This is one of the biggest results in complex analysis. Moreover, if one is willing to believe that we can take the derivative k times, we obtain

c_k = f^(k)(0) / k!

and this gives us f^(k)(0) = k! · c_k.

Naturally, we can do this with any circle (not just one centered at zero). So let’s state the full result below, with arbitrary center p.

Theorem 31.6.1 (Cauchy’s differentiation formula)
Let f : U → ℂ be a holomorphic function and let D be a disk centered at a point p and bounded by a circle γ. Suppose D and its boundary circle γ are contained inside U. Then f is given everywhere in D by a Taylor series

f(z) = c₀ + c₁(z − p) + c₂(z − p)² + ⋯

where

c_k = f^(k)(p)/k! = (1/2πi) ∮_γ f(w)/(w − p)^{k+1} dw.

In particular,

f^(k)(p) = k! · c_k = (k!/2πi) ∮_γ f(w)/(w − p)^{k+1} dw.

Most importantly,

Over any disk, a holomorphic function is given exactly by a Taylor series.

This establishes a result we stated at the beginning of the chapter: that a function being complex differentiable once means it is not only infinitely differentiable, but in fact equal to its Taylor series.

I should maybe emphasize a small subtlety of the result: the Taylor series centered at p is only valid in a disk centered at p which lies entirely in the domain U. If U = this is no issue, since you can make the disk big enough to accommodate any point you want. It’s more subtle in the case that U is, for example, a square; you can’t cover the entire square with a disk centered at some point without going outside the square. However, since U is open we can at any rate at least find some open neighborhood for which the Taylor series is correct – in stark contrast to the real case. Indeed, as you’ll see in the problems, the existence of a Taylor series is incredibly powerful.
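The coefficient formula is also pleasantly computable. Here is a Python sketch recovering c_k = (1/2πi) ∮_γ f(w)/(w − p)^{k+1} dw numerically for f = exp at p = 0, where we expect c_k = 1/k! (the circle radius and step count are arbitrary choices):

    import cmath, math

    def taylor_coeff(f, k, p=0.0, R=1.0, n=4000):
        # approximates (1/(2*pi*i)) * integral of f(w)/(w-p)^(k+1), w = p + R e^{it}
        h = 2 * cmath.pi / n
        total = 0
        for i in range(n):
            w = p + R * cmath.exp(1j * i * h)
            total += f(w) / (w - p) ** (k + 1) * (1j * (w - p)) * h  # dw = i(w-p) dt
        return total / (2j * cmath.pi)

    for k in range(5):
        # the two columns agree: c_k = 1/k!
        print(k, taylor_coeff(cmath.exp, k).real, 1 / math.factorial(k))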

31.7  A few harder problems to think about

These aren’t olympiad problems, but I think they’re especially nice! In the next complex analysis chapter we’ll see some more nice applications.

The first few results are the most important.

Problem 31A (Liouville’s theorem). Let f : ℂ → ℂ be an entire function. Suppose that |f(z)| < 1000 for all complex numbers z. Prove that f is a constant function.

Problem 31B (Zeros are isolated). An isolated set in the complex plane is a set of points S such that around each point in S, one can draw an open neighborhood not intersecting any other point of S.

Show that the zero set of any nonzero holomorphic function f : U → ℂ is an isolated set, unless there exists a nonempty open subset of U on which f is identically zero.

Problem 31C (Identity theorem). Let f, g : U → ℂ be holomorphic, and assume that U is connected. Prove that if f and g agree on some nonempty open neighborhood, then f = g.

Problem 31D (Maximums occur on boundaries). Let f : U → ℂ be holomorphic, let Y ⊆ U be compact, and let ∂Y be the boundary of Y. Show that

max_{z ∈ Y} |f(z)| = max_{z ∈ ∂Y} |f(z)|.

In other words, the maximum values of |f| occur on the boundary. (Such maximums exist by compactness.)

Problem 31E (Harvard quals). Let f : ℂ → ℂ be a nonconstant entire function. Prove that the image of f is dense in ℂ. (In fact, a much stronger result is true: Little Picard’s theorem says that the image of a nonconstant entire function omits at most one point.)

32  Meromorphic functions

32.1  The second nicest functions on earth

If holomorphic functions are like polynomials, then meromorphic functions are like rational functions. Basically, a meromorphic function is a function of the form A(z)/B(z), where A, B : U → ℂ are holomorphic and B is not zero. The most important example of a meromorphic function is 1/z.

We are going to see that meromorphic functions behave like “almost-holomorphic” functions. Specifically, a meromorphic function A/B will be holomorphic at all points except the zeros of B (called poles). By the identity theorem, there cannot be too many zeros of B! So meromorphic functions can be thought of as “almost holomorphic” (like 1/z, which is holomorphic everywhere but the origin). We saw that

(1/2πi) ∮_γ (1/z) dz = 1

for γ(t) = e^{it} the unit circle. We will extend our results on contours to such situations.

It turns out that, instead of just getting ∮_γ f(z) dz = 0 like we did in the holomorphic case, the contour integrals will actually be used to count the number of poles inside the loop γ. It’s ridiculous, I know.

32.2  Meromorphic functions

Prototypical example for this section: 1/z, with a pole of order 1 and residue 1 at z = 0.

Let U be an open subset of ℂ again.

Definition 32.2.1. A function f : U → ℂ is meromorphic if there exist holomorphic functions A, B : U → ℂ, with B not identically zero in any open neighborhood, and f(z) = A(z)/B(z) whenever B(z) ≠ 0.

Let’s see how this function f behaves. If z ∈ U has B(z) ≠ 0, then in some small open neighborhood the function B isn’t zero at all, and thus A/B is in fact holomorphic; thus f is holomorphic at z. (Concrete example: 1/z is holomorphic in any disk not containing 0.)

On the other hand, suppose p ∈ U has B(p) = 0; without loss of generality, p = 0 to ease notation. By using the Taylor series at p = 0 we can put

B(z) = c_k z^k + c_{k+1} z^{k+1} + ⋯

with c_k ≠ 0 (certainly some coefficient is nonzero, since B is not identically zero!). Then we can write

1/B(z) = (1/z^k) · 1/(c_k + c_{k+1} z + ⋯).

But the fraction on the right is a holomorphic function in this open neighborhood! So all that’s happened is that we have an extra factor of 1/z^k kicking around.

This gives us an equivalent way of viewing meromorphic functions:

Definition 32.2.2. Let f : U → ℂ as usual. A meromorphic function is a function which is holomorphic on U except at an isolated set S of points (meaning it is holomorphic as a function U ∖ S → ℂ). For each p ∈ S, called a pole of f, the function f must admit a Laurent series, meaning that

f(z) = c_{−m}/(z − p)^m + c_{−m+1}/(z − p)^{m−1} + ⋯ + c_{−1}/(z − p) + c₀ + c₁(z − p) + ⋯

for all z in some open neighborhood of p, other than z = p. Here m is a positive integer (and c_{−m} ≠ 0).

Note that the negative part of the series must terminate. By “isolated set”, I mean that we can draw open neighborhoods around each pole in S, in such a way that no two open neighborhoods intersect.

Example 32.2.3 (Example of a meromorphic function)
Consider the function

z-+-1.
sinz

It is meromorphic, because it is holomorphic everywhere except at the zeros of sinz. At each of these points we can put a Laurent series: for example at z = 0 we have

z-+-1
 sin z = (z + 1) -----3-1-5------
z − z3! + z5! − ...
= 1-
z ---------z-+-1---------
    (z2   z4   z6     )
1 −   3! − 5! + 7! − ...
= 1
--
z (z + 1) k0(z2   z4   z6      )
 -- − -- + -- − ...
 3!   5!   7!k.

If we expand out the horrible sum (which I won’t do), then you get 1
z times a perfectly fine Taylor series, i.e. a Laurent series.
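Happily, you don’t have to expand the horrible sum by hand: the third-party sympy library can produce the leading terms of the Laurent series, and the residue, for you (a quick Python sketch):

    import sympy as sp

    z = sp.symbols('z')
    f = (z + 1) / sp.sin(z)
    print(sp.series(f, z, 0, 3))  # 1/z + 1 + z/6 + ... (the Laurent series)
    print(sp.residue(f, z, 0))    # 1, the coefficient of 1/z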

Abuse of Notation 32.2.4. We’ll often say something like “consider the function f : ℂ → ℂ by z ↦ 1/z”. Of course this isn’t completely correct, because f doesn’t have a value at z = 0. If I was going to be completely rigorous I would just set f(0) = 2015 or something and move on with life, but for all intents let’s just think of it as “undefined at z = 0”.

Why don’t I just write g : ℂ ∖ {0} → ℂ? The reason I have to do this is that it’s still important for f to remember it’s “trying” to be holomorphic on ℂ, even if it isn’t assigned a value at z = 0. As a function ℂ ∖ {0} → ℂ, the function 1/z is actually holomorphic.

Remark 32.2.5 — I have shown that any function A(z)/B(z) has this characterization with poles, but an important result is that the converse is true too: if f : U ∖ S → ℂ is holomorphic for some isolated set S, and moreover f admits a Laurent series at each point in S, then f can be written as a rational quotient of holomorphic functions. I won’t prove this here, but it is good to be aware of.

Definition 32.2.6. Let p be a pole of a meromorphic function f, with Laurent series

f(z) = c_{−m}/(z − p)^m + c_{−m+1}/(z − p)^{m−1} + ⋯ + c_{−1}/(z − p) + c₀ + c₁(z − p) + ⋯.

The integer m is called the order of the pole. A pole of order 1 is called a simple pole.

We also give the coefficient c_{−1} a name, the residue of f at p, which we write Res(f; p).

The order of a pole tells you how “bad” the pole is. The order of a pole is the “opposite” concept of the multiplicity of a zero. If f has a pole at zero, then its Laurent series near z = 0 might look something like

f(z) = 1/z⁵ + 8/z³ − 2/z² + 4/z + 9 − 3z + 8z² + ⋯

and so f has a pole of order five. By analogy, if g has a zero at z = 0, it might look something like

g(z) = 3z³ + 2z⁴ + 9z⁵ + ⋯

and so g has a zero of multiplicity three. These orders are additive: f(z)g(z) still has a pole of order 5 − 3 = 2, but f(z)g(z)² is completely patched now, and in fact has a simple zero now (that is, a zero of multiplicity 1).

Exercise 32.2.7. Convince yourself that orders are additive as described above. (This is obvious once you understand that you are multiplying Taylor/Laurent series.)

Metaphorically, poles can be thought of as “negative zeros”.

We can now give many more examples.

Example 32.2.8 (Examples of meromorphic functions)

(a)
Any holomorphic function is a meromorphic function which happens to have no poles. Stupid, yes.
(b)
The function ℂ → ℂ by z ↦ 100/z (for z ≠ 0, but undefined at zero) is a meromorphic function. Its only pole is at zero, which has order 1 and residue 100.
(c)
The function ℂ → ℂ by z ↦ z^{−3} + z² + z⁹ is also a meromorphic function. Its only pole is at zero, which has order 3 and residue 0.
(d)
The function ℂ → ℂ by z ↦ e^z/z² is meromorphic, with the Laurent series at z = 0 given by

e^z/z² = 1/z² + 1/z + 1/2 + z/6 + z²/24 + z³/120 + ⋯.

Hence the pole z = 0 has order 2 and residue 1.

Example 32.2.9 (A rational meromorphic function)
Consider the function given by

z ↦ (z⁴ + 1)/(z² − 1) = z² + 1 + 2/((z − 1)(z + 1))
 = z² + 1 + (1/(z − 1)) · 1/(1 + (z − 1)/2)
 = 1/(z − 1) + 3/2 + (9/4)(z − 1) + (7/8)(z − 1)² + ⋯

It has a pole of order 1 and residue 1 at z = 1. (It also has a pole of order 1 at z = −1; you are invited to compute the residue.)

Example 32.2.10 (Function with infinitely many poles)
The function ℂ → ℂ given by

z ↦ 1/sin(z)

has infinitely many poles: the numbers z = πk, where k is an integer. Let’s compute the Laurent series at just z = 0:

1/sin(z) = 1/(z − z³/3! + z⁵/5! − ⋯)
 = (1/z) · 1/(1 − (z²/3! − z⁴/5! + ⋯))
 = (1/z) ∑_{k≥0} (z²/3! − z⁴/5! + ⋯)^k

which is a Laurent series, though I have no clue what the coefficients are. You can at least see the residue: the constant term of that huge sum is 1, so the residue is 1. Also, the pole has order 1.

The Laurent series, if it exists, is unique (as you might have guessed), and by our result on holomorphic functions it is actually valid for any disk centered at p (minus the point p). The part c_{−1}/(z − p) + ⋯ + c_{−m}/(z − p)^m is called the principal part, and the rest of the series c₀ + c₁(z − p) + ⋯ is called the analytic part.

32.3  Winding numbers and the residue theorem

Recall that for a counterclockwise circle γ and a point p inside it, we had

∮_γ (z − p)^m dz = 2πi if m = −1, and 0 otherwise

where m is an integer. One can extend this result to in fact show that ∮_γ (z − p)^m dz = 0 for any loop γ, where m ≠ −1. So we associate a special name to the nonzero value at m = −1.

Definition 32.3.1. For a point p ∈ ℂ and a loop γ not passing through it, we define the winding number, denoted I(γ, p), by

I(γ, p) = (1/2πi) ∮_γ 1/(z − p) dz.

For example, by our previous results we see that if γ is a circle, we have

I(circle, p) = 1 if p is inside the circle, and 0 if p is outside the circle.

If you’ve read the chapter on fundamental groups, then this is just the class of γ in the fundamental group associated to ℂ ∖ {p}. In particular, the winding number is always an integer (the proof of this requires the complex logarithm, so we omit it here). In the simplest cases the winding numbers are either 0 or 1.

Definition 32.3.2. We say a loop γ is regular if I(γ, p) = 1 for all points p in the interior of γ (for example, if γ is a counterclockwise circle).

With all these ingredients we get a stunning generalization of the Cauchy-Goursat theorem:

Theorem 32.3.3 (Cauchy’s residue theorem)
Let f : Ω → ℂ be meromorphic, where Ω is simply connected. Then for any loop γ not passing through any of its poles, we have

(1/2πi) ∮_γ f(z) dz = ∑_{poles p} I(γ, p) Res(f; p).

In particular, if γ is regular then the contour integral is the sum of all the residues, in the form

(1/2πi) ∮_γ f(z) dz = ∑_{poles p inside γ} Res(f; p).

Question 32.3.4. Verify that this result coincides with what you expect when you integrate ∮_γ cz^{−1} dz for γ a counterclockwise circle.

The proof from here is not really too impressive – the “work” was already done in our statements about the winding number.

Proof. Let the poles with nonzero winding number be p₁, …, p_k (the others do not affect the sum). Then we can write f in the form

f(z) = g(z) + ∑_{i=1}^{k} P_i(1/(z − p_i))

where P_i(1/(z − p_i)) is the principal part of the pole p_i. (For example, if f(z) = (z³ − z + 1)/(z(z + 1)) we would write f(z) = (z − 1) + 1/z − 1/(1 + z).)

The point of doing so is that the function g is holomorphic (we’ve removed all the “bad” parts), so

∮_γ g(z) dz = 0

by Cauchy-Goursat.

On the other hand, if P_i(x) = c₁x + c₂x² + ⋯ + c_d x^d, then

∮_γ P_i(1/(z − p_i)) dz = ∮_γ c₁/(z − p_i) dz + ∮_γ c₂/(z − p_i)² dz + ⋯
 = c₁ · 2πi · I(γ, p_i) + 0 + 0 + ⋯
 = 2πi · I(γ, p_i) · Res(f; p_i)

which gives the conclusion. □

32.4  Argument principle

One tricky application is as follows. Given a polynomial P(x) = (x − a₁)^{e₁} (x − a₂)^{e₂} ⋯ (x − aₙ)^{eₙ}, you might know that we have

P′(x)/P(x) = e₁/(x − a₁) + e₂/(x − a₂) + ⋯ + eₙ/(x − aₙ).

The quantity P′/P is called the logarithmic derivative, as it is the derivative of log P. This trick allows us to convert zeros of P into poles of P′/P with order 1; moreover the residues of these poles are the multiplicities of the roots.

In an analogous fashion, we can obtain a similar result for any meromorphic function f.

Proposition 32.4.1 (The logarithmic derivative)
Let f : U → ℂ be a meromorphic function. Then the logarithmic derivative f′/f is meromorphic as a function from U to ℂ; its poles are:

(i)
a pole at each zero of f, whose residue is the multiplicity of that zero, and
(ii)
a pole at each pole of f, whose residue is the negative of the pole’s order.

Again, you can almost think of a pole as a zero of negative multiplicity. This spirit is exemplified below.

Proof. Dead easy with Taylor series. Let a be a zero/pole of f, and WLOG set a = 0 for convenience. We take the Taylor series at zero to get

f(z) = c_k z^k + c_{k+1} z^{k+1} + ⋯

where k < 0 if 0 is a pole and k > 0 if 0 is a zero. Taking the derivative gives

f′(z) = k c_k z^{k−1} + (k + 1) c_{k+1} z^k + ⋯.

Now look at f′/f; with some computation, it equals

f′(z)/f(z) = (1/z) · (k c_k + (k + 1) c_{k+1} z + ⋯)/(c_k + c_{k+1} z + ⋯).

So we get a simple pole at z = 0, with residue k. □

Using this trick you can determine the number of zeros and poles inside a regular closed curve, using the so-called Argument Principle.

Theorem 32.4.2 (Argument principle)
Let γ be a regular curve. Suppose f : U → ℂ is meromorphic inside and on γ, and none of its zeros or poles lie on γ. Then

(1/2πi) ∮_γ f′/f dz = Z − P

where Z is the number of zeros inside γ (counted with multiplicity) and P is the number of poles inside γ (again with multiplicity).

Proof. Immediate by applying Cauchy’s residue theorem alongside the preceding proposition. In fact you can generalize to any curve γ via the winding number: the integral is

(1/2πi) ∮_γ f′/f dz = Σ_{zero z} I(γ, z) − Σ_{pole p} I(γ, p)

where the sums are with multiplicity. □

Thus the Argument Principle allows one to count zeros and poles inside any region of choice.

Computers can use this to get information on functions whose values can be computed but whose behavior as a whole is hard to understand. Suppose you have a holomorphic function f, and you want to understand where its zeros are. Then just start picking various circles γ. Even with machine rounding error, the integral will be close enough to the true integer value that we can decide how many zeros are in any given circle. Numerical evidence for the Riemann Hypothesis (concerning the zeros of the Riemann zeta function) can be obtained in this way.
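If you want to experiment with this, here is a minimal sketch in Python; the function name count_zeros and all numerical choices are my own for illustration (nothing beyond the standard library is assumed). It approximates (1/2πi) ∮_γ f′/f dz over a circle by a uniform Riemann sum and rounds to the nearest integer.

import cmath

def count_zeros(f, df, center=0j, radius=1.0, samples=4096):
    """Estimate (1/2πi) ∮ f'(z)/f(z) dz over a circle numerically.

    By the argument principle this equals Z − P, the number of zeros
    minus poles of f inside the circle (with multiplicity), provided
    none lie on the circle itself.  For holomorphic f it is Z.
    """
    total = 0j
    step = 2 * cmath.pi / samples
    for k in range(samples):
        z = center + radius * cmath.exp(1j * k * step)
        dz = 1j * radius * cmath.exp(1j * k * step) * step  # γ'(t) dt
        total += df(z) / f(z) * dz
    return round((total / (2j * cmath.pi)).real)

# f(z) = z^3 − 1 has all three of its roots inside |z| <= 2:
print(count_zeros(lambda z: z**3 - 1, lambda z: 3 * z**2, radius=2.0))  # 3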

32.5  Philosophy: why are holomorphic functions so nice?

All the fun we’ve had with holomorphic and meromorphic functions comes down to the fact that complex differentiability is such a strong requirement. It’s a small miracle that ℂ, which a priori looks only like ℝ², is in fact a field. Moreover, ℝ² has the nice property that one can draw nontrivial loops (it’s also true for real functions that ∫_a^a f dx = 0, but this is not so interesting!), and this makes the theory much more interesting.

As another piece of intuition from Siu2 : If you try to get (left) differentiable functions over quaternions, you find yourself with just linear functions.

32.6  A few harder problems to think about

Problem 32A (Fundamental theorem of algebra). Prove that if f is a nonzero polynomial of degree n then it has n roots (counted with multiplicity).

Problem 32B (Rouché’s theorem). Let f, g : U → ℂ be holomorphic functions, where U contains the unit disk. Suppose that |f(z)| > |g(z)| for all z on the unit circle. Prove that f and f + g have the same number of zeros which lie strictly inside the unit circle (counting multiplicities).

Problem 32C (Wedge contour). For each odd integer n, evaluate the improper integral

∫_0^∞ 1/(1 + xⁿ) dx.

Problem 32D (Another contour). Prove that the integral

∫_{−∞}^{∞} cos(x)/(x² + 1) dx

converges and determine its value.

Problem 32E. Let f : U → ℂ be a nonconstant holomorphic function.

(a)
(Open mapping theorem) Prove that f^img(U) is open in ℂ.3
(b)
(Maximum modulus principle) Show that |f| cannot have a maximum over U. That is, show that for any z ∈ U, there is some z′ ∈ U such that |f(z)| < |f(z′)|.

33  Holomorphic square roots and logarithms

In this chapter we’ll make sense of a holomorphic square root and logarithm. The main results are ??, ??, ??, and ??. If you like, you can read just these four results, and skip the discussion of how they came to be.

Let f : U → ℂ be a holomorphic function. A holomorphic nth root of f is a function g : U → ℂ such that f(z) = g(z)ⁿ for all z ∈ U. A logarithm of f is a function g : U → ℂ such that f(z) = e^{g(z)} for all z ∈ U. The main question we’ll try to figure out is: when do these exist? In particular, what if f = id?

33.1  Motivation: square root of a complex number

To start us off, can we define √z for any complex number z?

The first obvious problem that comes up is that for any z, there are two numbers w such that w2 = z. How can we pick one to use? For our ordinary square root function, we had a notion of “positive”, and so we simply took the positive root.

Let’s expand on this: given z = r(cos θ + i sin θ) (here r ≥ 0) we should take the root to be

w = √r (cos α + i sin α)

such that 2α ≡ θ (mod 2π); there are two choices for α (mod 2π), differing by π.

For complex numbers, we don’t have an obvious way to pick α. Nonetheless, perhaps we can also get away with an arbitrary distinction: let’s see what happens if we just choose the α with −½π < α ≤ ½π.

Pictured below are some points (in red) and their images (in blue) under this “upper-half” square root. The condition on α means we are forcing the blue points to lie on the right-half plane.

Here, wᵢ² = zᵢ for each i, and we are constraining the wᵢ to lie in the right half of the complex plane. We see there is an obvious issue: there is a big discontinuity near the points w5 and w7! The nearby point w6 has been mapped very far away. This discontinuity occurs since the points on the negative real axis are at the “boundary”. For example, given −4, we send it to 2i, but we have hit the boundary: in our interval −½π < α ≤ ½π, we are at the very edge.

The negative real axis that we must not touch is what we will later call a branch cut, but for now I call it a ray of death. It is a warning to the red points: if you cross this line, you will die! However, if we move the red circle just a little upwards (so that it misses the negative real axis) this issue is avoided entirely, and we get what seems to be a “nice” square root.

In fact, the ray of death is fairly arbitrary: it is the set of “boundary issues” that arose when we picked −½π < α ≤ ½π. Suppose we instead insisted on the interval 0 ≤ α < π; then the ray of death would be the positive real axis instead. The earlier circle we had now works just fine.

What we see is that picking a particular α-interval leads to a different set of edge cases, and hence a different ray of death. The only thing these rays have in common is their starting point of zero. In other words, given a red circle and a restriction of α, I can make a nice “square rooted” blue circle as long as the ray of death misses it.

So, what exactly is going on?

33.2  Square roots of holomorphic functions

To get a picture of what’s happening, we would like to consider a more general problem: let f : U → ℂ be holomorphic. Then we want to decide whether there is a g : U → ℂ such that

f(z) = g(z)².

Our previous discussion with f = id tells us we cannot hope to achieve this for U = ℂ; there is a “half-ray” which causes problems. However, there are certainly functions f : ℂ → ℂ such that a g exists. As a simplest example, f(z) = z² should definitely have a square root!

Now let’s see if we can fudge together a square root. Earlier, what we did was try to specify a rule to force one of the two choices at each point. This is unnecessarily strict. Perhaps we can do something like: start at a point z0 ∈ U, pick a square root w0 of f(z0), and then try to “fudge” from there the square roots of the other points. What do I mean by fudge? Well, suppose z1 is a point very close to z0, and we want to pick a square root w1 of f(z1). While there are two choices, we also would expect w0 to be close to w1. Unless we are highly unlucky, this should tell us which choice of w1 to pick. (Stupid concrete example: if I have taken the square root 4.12i of −17 and then ask you to continue this square root to −16, which sign should you pick for ±4i?)

There are two possible ways we could get unlucky in the scheme above: first, if w0 = 0, then we’re sunk. But even if we avoid that, we have to worry that if we run a full loop in the complex plane, we might end up in a different place from where we started. For concreteness, consider the following situation, again with f = id:

We started at the point z0, with one of its square roots as w0. We then wound a full red circle around the origin, only to find that at the end of it, the blue arc is at a different place from where it started!

The interval construction from earlier doesn’t work either: no matter how we pick the interval for α, any ray of death must hit our red circle. The problem somehow lies with the fact that we have enclosed the very special point 0.

Nevertheless, we know that if we take f(z) = z2, then we don’t run into any problems with our “make it up as you go” procedure. So, what exactly is going on?

33.3  Covering projections

By now, if you have read the part on algebraic topology, this should all seem quite familiar. The “fudging” procedure exactly describes the idea of a lifting.

More precisely, recall that there is a covering projection

(−)² : ℂ ∖ {0} → ℂ ∖ {0}.

Let V = {z ∈ U | f(z) ≠ 0}. For z ∈ U ∖ V, we already have the square root g(z) = √(f(z)) = √0 = 0. So the burden is defining g : V → ℂ.

Then essentially, what we are trying to do is construct a lifting g in the diagram:

             E = ℂ ∖ {0}
            ↗     |
          g       | p = (−)²
         ↗        ↓
    V ----f----> B = ℂ ∖ {0}.

Our map p can be described as “winding around twice”. Our ?? now tells us that this lifting exists if and only if

f∗^img(π1(V)) ⊆ p∗^img(π1(E)),

i.e. the image of π1(V) under f∗ is a subset of the image of π1(E) under p∗. Since B and E are both punctured planes, we can identify them (up to homotopy equivalence) with S¹.

Question 33.3.1. Show that the image under p∗ is exactly 2ℤ, once we identify π1(B) ≅ ℤ.

That means that for any loop γ in V, we need f ∘ γ to have an even winding number around 0 ∈ B. This amounts to

(1/2πi) ∮_γ f′/f dz ∈ 2ℤ

since f has no poles.

Replacing 2 with n and carrying over the discussion gives the first main result.

Theorem 33.3.2 (Existence of holomorphic nth roots)
Let f : U → ℂ be holomorphic. Then f has a holomorphic nth root if and only if

(1/2πi) ∮_γ f′/f dz ∈ nℤ

for every contour γ in U.
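For example, take f(z) = z² on U = ℂ ∖ {0}. Then f′/f = 2/z, so for every loop γ in U we get (1/2πi) ∮_γ f′/f dz = 2 I(γ, 0) ∈ 2ℤ, and indeed g(z) = z is a holomorphic square root of f.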

33.4  Complex logarithms

The multivalued nature of the complex logarithm comes from the fact that

exp (z + 2πi) = exp (z).

So if e^w = z, then any complex number w + 2πik (k ∈ ℤ) is also a solution.

We can handle this in the same way as before: it amounts to a lifting of the following diagram:

             E = ℂ
            ↗    |
          g      | p = exp
         ↗       ↓
    U ----f----> B = ℂ ∖ {0}.
There is no longer a need to work with a separate V since:

Question 33.4.1. Show that if f has any zeros then g can’t possibly exist.

In fact, the map exp : ℂ → ℂ ∖ {0} is a universal cover, since ℂ is simply connected. Thus, p∗^img(π1(E)) is trivial. So in addition to being zero-free, f cannot have any winding number around 0 ∈ B at all. In other words:

Theorem 33.4.2 (Existence of logarithms)
Let f : U → ℂ be holomorphic. Then f has a logarithm if and only if

(1/2πi) ∮_γ f′/f dz = 0

for every contour γ in U.

33.5  Some special cases

The most common special case is

Corollary 33.5.1 (Nonvanishing functions from simply connected domains)
Let f : Ω → ℂ be holomorphic, where Ω is simply connected. If f(z) ≠ 0 for every z ∈ Ω, then f has both a logarithm and a holomorphic nth root.

Finally, let’s return to the question of f = id from the very beginning. What’s the best domain U such that

√(−) : U → ℂ

is well-defined? Clearly U = ℂ cannot be made to work, but we can do almost as well. For note that the only zero of f = id is at the origin. Thus if we want to make a logarithm exist, all we have to do is make an incision in the complex plane that renders it impossible to make a loop around the origin. The usual choice is to delete the negative half of the real axis, our very first ray of death; we call this a branch cut, with branch point at 0 (the point which we cannot circle around). This gives

Theorem 33.5.2 (Branch cut functions)
There exist holomorphic functions

log : ℂ ∖ (−∞, 0] → ℂ
ⁿ√(−) : ℂ ∖ (−∞, 0] → ℂ

satisfying the obvious properties.

There are many possible choices of such functions (n choices for the nth root and infinitely many for log); a choice of such a function is called a branch. So this is what is meant by a “branch” of a logarithm.

The principal branch is the “canonical” branch, analogous to the way we arbitrarily pick the positive root to define √(−) : ℝ≥0 → ℝ≥0. For log, we take the w such that e^w = z and the imaginary part of w lies in (−π, π] (since we can shift by integer multiples of 2πi). Often, authors will write Log z to emphasize this choice.

33.6  A few harder problems to think about

Problem 33A. Show that a holomorphic function f : U → ℂ has a holomorphic logarithm if and only if it has a holomorphic nth root for every integer n.

Problem 33B. Show that the function f : U → ℂ by z ↦ z(z − 1) has a holomorphic square root, where U is the entire complex plane minus the closed interval [0, 1].

Part X
Measure Theory

34  Measure spaces

Here is an outline of where we are going next. Our goal over the next few chapters is to develop the machinery to state (and in some cases prove) the law of large numbers and the central limit theorem. For these purposes, the scant amount of work we did in Calculus 101 is going to be awfully insufficient: integration over ℝ (or even ℝⁿ) is just not going to cut it.

This chapter will develop the theory of “measure spaces”, which you can think of as “spaces equipped with a notion of size”. We will then be able to integrate over these with the so-called Lebesgue integral (which in some senses is almost strictly better than the Riemann one).

Letter connotations

There are a lot of “types” of objects moving forward, so here are the letter connotations we’ll use throughout the next several chapters; this makes it easier to tell the “type” of each object just from which letter is used. Roughly: Ω denotes the whole space, with ω ∈ Ω a point of it; script letters like 𝒜 and ℬ denote σ-algebras, with A, B, … for measurable sets; μ, λ, ν denote measures; and f, g denote functions or random variables.

34.1  Motivating measure spaces via random variables

To motivate why we want to construct measure spaces, I want to talk about a (real) random variable, which you might think of as, for example: the result of a single coin flip, tomorrow’s high temperature, or whether it will rain tomorrow.

Why does this need a long theory to develop well? For a simple coin flip one intuitively just thinks “50% heads, 50% tails” and is done with it. The situation is a little trickier with temperature since it is continuous rather than discrete, but if all you care about is that one temperature, calculus seems like it might be enough to deal with this.

But it gets more slippery once the variables start to “talk to” each other: the high temperature tells you a little bit about whether it will rain, because e.g. if the temperature is very high it’s quite likely to be sunny. Suddenly we find ourselves wishing we could talk about conditional probability, but this is a whole can of worms — the relations between these sorts of things can get very complicated very quickly.

The big idea to getting a formalism for this is that:

Our measure spaces Ω will be thought of as a space of entire worlds, with each ω ∈ Ω representing a world. Random variables are functions from worlds to ℝ.

This way, the space of “worlds” takes care of all the messy interdependence.

Then, we can assign “measures” to sets of worlds: for example, to be a fair coin means that if you are only interested in that one coin flip, the “fraction” of worlds in which that coin showed heads should be ½. This is in some ways backwards from what you were told in high school: officially, we start with the space of worlds, rather than starting with the probabilities.

It will soon be clear that there is no way we can assign a well-defined measure to every single subset in 2^Ω. Fortunately, in practice, we won’t need to, and the notion of a σ-algebra will capture the idea of “enough measurable sets for us to get by”.

Remark 34.1.1 (Random seeds) Another analogy if you do some programming: each ω ∈ Ω is a random seed, and everything is determined from there.

34.2  Motivating measure spaces geometrically

So, we have a set Ω of possible points (which in the context of the previous discussion can be thought of as the set of worlds), and we want to assign a measure (think volume) to subsets of points in Ω. We will now describe some of the obstacles that we will face, in order to motivate how measure spaces are defined (as the previous section only motivated why we want such things).

If you try to do this naïvely, you basically immediately run into set-theoretic issues. A good example to think about why this might happen is if Ω = ℝ² with the measure corresponding to area. You can define the area of a triangle as in high school, and you can then try and define the area of a circle, maybe by approximating it with polygons. But what area would you assign to the subset ℚ², for example? (It turns out “zero” is actually a working answer.) Or, a unit disk is composed of infinitely many points; each of the points better have measure zero, but why does their union have measure π then? Blah blah blah.

We’ll say more about this later, but you might have already heard of the Banach-Tarski paradox which essentially shows there is no good way that you can assign a measure to every single subset of ℝ³ and still satisfy basic sanity checks. There are just too many possible subsets of Euclidean space.

However, the good news is that most of these sets are not ones that we will ever care about, and it’s enough to define measures for certain “sufficiently nice sets”. The adjective we will use is measurable, and it will turn out that this will be way, way more than good enough for any practical purposes.

We will generally use A, B, …for measurable sets and denote the entire family of measurable sets by curly 𝒜 .

34.3  σ-algebras and measurable spaces

Here’s the machine code.

Definition 34.3.1. A measurable space consists of a space Ω of points, and a σ-algebra 𝒜 of subsets of Ω (the “measurable sets” of Ω). The set 𝒜 is required to satisfy the following axioms: it contains ∅ and Ω; it is closed under taking complements; and it is closed under countable unions and countable intersections.

(Complaint: this terminology is phonetically confusing, because it can be confused with “measure space” later. The way to think about it is that “measurable spaces have a σ-algebra, so we could try to put a measure on it, but we haven’t, yet.”)

Though this definition is how we actually think about it in a few select cases, for the most part we will usually instantiate 𝒜 in practice in a different way:

Definition 34.3.2. Let Ω be a set, and consider some family ℱ of subsets of Ω. Then the σ-algebra generated by ℱ is the smallest σ-algebra 𝒜 which contains ℱ.

As is commonplace in math, when we see “generated”, this means we sort of let the definition “take care of itself”. So, if Ω = ℝ, maybe I want 𝒜 to contain all open sets. Well, then the definition means it should contain all complements too, so it contains all the closed sets. Then it has to contain all the half-open intervals too, and then…. Rather than try to reason out what exactly the final shape 𝒜 looks like (which basically turns out to be impossible), we just give up and say “𝒜 is all the sets you can get if you start with the open sets and apply repeatedly union/complement operations”. Or even more bluntly: “start with closed sets, shake vigorously”.

I’ve gone on too long with no examples.

Example 34.3.3 (Examples of measurable spaces)
The first two examples actually say what 𝒜 is; the third example (most important) will use generation.

(a)
If Ω is any set, then the power set 𝒜 = 2^Ω is obviously a σ-algebra. This will be used if Ω is countable, but it won’t be very helpful if Ω is huge.
(b)
If Ω is an uncountable set, then we can declare 𝒜 to be all subsets of Ω which are either countable, or which have countable complement. (You should check this satisfies the definitions.) This is a very “coarse” algebra.
(c)
If Ω is a topological space, the Borel σ-algebra is defined as the σ-algebra generated by all the open sets of Ω. We denote it by ℬ(Ω), and call the space a Borel space. As warned earlier, it is basically impossible to describe what it looks like, and instead you should think of it as saying “we can measure the open sets”.

Question 34.3.4. Show that the closed sets are in ℬ(Ω) for any topological space Ω. Show that [0, 1) is also in ℬ(ℝ).

34.4  Measure spaces

Definition 34.4.1. Measurable spaces (Ω, 𝒜) are then equipped with a function μ : 𝒜 → [0, +∞] called the measure, which is required to satisfy the following axioms: μ(∅) = 0, and countable additivity: for countably many disjoint sets A1, A2, … ∈ 𝒜, we have μ(⨆ₙ Aₙ) = Σₙ μ(Aₙ).

The triple (Ω, 𝒜, μ) is called a measure space. It’s called a probability space if μ(Ω) = 1.

Exercise 34.4.2 (Weaker equivalent definitions). I chose to give axioms for 𝒜 and μ that capture how people think of them in practice, which means there is some redundancy: for example, being closed under complements and unions is enough to get intersections, by de Morgan’s law. Here are more minimal definitions, which are useful if you are trying to prove something satisfies them to reduce the amount of work you have to do:

(a)
The axioms on 𝒜 can be weakened to (i) ∅ ∈ 𝒜 and (ii) 𝒜 is closed under complements and countable disjoint unions.
(b)
The axioms on μ can be weakened to (i) μ(∅) = 0, (ii) μ(A ⊔ B) = μ(A) + μ(B) for disjoint A and B, and (iii) for A1 ⊇ A2 ⊇ ⋯, we have μ(⋂ₙ Aₙ) = limₙ μ(Aₙ).

Remark 34.4.3 — Here are some immediate remarks on these definitions.

We don’t want to allow uncountable unions or uncountable additivity, because uncountable sums basically never work out. In particular, there is a nice elementary exercise as follows:

Exercise 34.4.4 (Tricky). Let S be an uncountable set of positive real numbers. Show that some finite subset T ⊆ S has sum greater than 10^{2019}. Colloquially, “uncountably many positive reals cannot have finite sum”.

So countable sums are as far as we’ll let the infinite sums go. This is the reason why we considered σ-algebras in the first place.

Example 34.4.5 (Measures)
We now discuss measures on each of the spaces in our previous examples.

(a)
If 𝒜 = 2^Ω (or for that matter any 𝒜) we may declare μ(A) = |A| for each A ∈ 𝒜 (even if |A| = ∞). This is called the counting measure, simply counting the number of elements.

This is useful if Ω is countably infinite, and optimal if Ω is finite (and nonempty). In the latter case, we will often normalize by μ(A) = |A|/|Ω| so that Ω becomes a probability space.

(b)
Suppose Ω was uncountable and we took 𝒜 to be the countable sets and their complements. Then

μ(A) =  0   if A is countable
        1   if Ω ∖ A is countable

is a measure. (Check this.)

(c)
Elephant in the room: defining a measure on ℬ(Ω) is hard even for Ω = ℝ, and is done in the next chapter. So you will have to hold your breath. Right now, all you know is that by declaring my intent to define a measure on ℬ(Ω), I am hoping that at least every open set will have a volume.

34.5  A hint of Banach-Tarski

I will now try to convince you that ℬ(Ω) is a necessary concession, and for general topological spaces like Ω = ℝⁿ, there is no hope of assigning a measure to all of 2^Ω. (In the literature, this example is called a Vitali set.)

Example 34.5.1 (A geometric example why 𝒜 = 2Ω is unsuitable)
Let Ω denote the unit circle in 2 and 𝒜 = 2Ω. We will show that any measure μ on Ω with μ(Ω) = 1 will have undesirable properties.

Let ∼ denote an equivalence relation on Ω defined as follows: two points are equivalent if they differ by a rotation around the origin by a rational multiple of π. We may pick a representative from each equivalence class, letting X denote the set of representatives. Then

Ω = ⨆_{q ∈ ℚ, 0 ≤ q < 2} (X rotated by qπ radians).

Since we’ve only rotated X, each of the rotations should have the same measure m. But μ(Ω) = 1, and there is no value we can assign that measure: if m = 0 we get μ(Ω) = 0, and if m > 0 we get μ(Ω) = ∞.

Remark 34.5.2 (Choice) Experts may recognize that picking a representative (i.e. creating set X) technically requires the Axiom of Choice. That is why, when people talk about Banach-Tarski issues, the Axiom of Choice almost always gets honorable mention as well.

Stay tuned to actually see a construction of a measure on ℬ(ℝⁿ) in the next chapter.

34.6  Measurable functions

In the past, when we had topological spaces, we considered continuous functions. The analog here is:

Definition 34.6.1. Let (X, 𝒜) and (Y, ℬ) be measurable spaces (or measure spaces). A function f : X → Y is measurable if for any measurable set S ⊆ Y (i.e. S ∈ ℬ) we have f^pre(S) measurable (i.e. f^pre(S) ∈ 𝒜).

In practice, most functions you encounter will be continuous anyways, and in that case we are fine.

Proposition 34.6.2 (Continuous implies Borel measurable)
Suppose X and Y are topological spaces and we pick the Borel σ-algebras on both. A function f : X → Y which is continuous as a map of topological spaces is also measurable.

Proof. Follows from the fact that pre-images of open sets are open, and the Borel σ-algebra is generated by the open sets. □

34.7  On the word “almost” (TO DO)

In later chapters we will begin seeing the phrase “almost everywhere” and “almost surely” start to come up, and it seems prudent to take the time to talk about it now.

Definition 34.7.1. We say that property P occurs almost everywhere or almost surely if the set

{ω ∈ Ω | P does not hold for ω}

has measure zero.

For example, if we say “f = g almost everywhere” for some functions f and g defined on a measure space Ω, then we mean that f(ω) = g(ω) for all ω Ω other than a measure-zero set.

There, that’s the definition. The main thing to now update your instincts on is that

In measure theory, we basically only care about things up to almost-everywhere.

Here are some examples:
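For instance, using the Lebesgue measure constructed in the next chapter: the indicator function 1_ℚ of the rationals satisfies 1_ℚ = 0 almost everywhere, since the set ℚ where they differ is countable and will turn out to have measure zero. Relatedly, two functions that agree almost everywhere will end up having equal integrals.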

You can think of this sort of like group isomorphism, where two groups are considered “basically the same” when they are isomorphic, except this one might take a little while to get used to.1

34.8  A few harder problems to think about

Problem 34A. Let (Ω, 𝒜, μ) be a probability space. Show that the intersection of countably many sets of measure 1 also has measure 1.

Problem 34B (On countable σ-algebras). Let 𝒜 be a σ-algebra on a set Ω. Suppose that 𝒜 has countable cardinality. Prove that |𝒜| is finite and equals a power of 2.

35  Constructing the Borel and Lebesgue measure

It’s very difficult to define in one breath a measure on the Borel space ℬ(ℝⁿ). It is easier if we define a weaker notion first. There are two such weaker notions that we will define: pre-measures, which live on a mere algebra of subsets of Ω, and outer measures, which are defined on all of 2^Ω but satisfy weaker axioms.

It will turn out that pre-measures yield outer measures, and outer measures yield measures.

35.1  Pre-measures

Prototypical example for this section: Let Ω = ℝ². Then we take 𝒜0 generated by rectangles, with μ0 the usual area.

The way to define a pre-measure is to weaken the σ-algebra to an algebra.

Definition 35.1.1. Let Ω be a set. We define the notion of an algebra, which is the same as a σ-algebra except with “countable” replaced by “finite” everywhere.

That is: an algebra 𝒜0 on Ω is a nonempty subset of 2^Ω which is closed under complement and finite union. The smallest algebra containing a family ℱ ⊆ 2^Ω is the algebra generated by ℱ.

In practice, we will basically always use generation for algebras.

Example 35.1.2
When Ω = ℝⁿ, we can let ℒ0 be the algebra generated by the rectangular prisms [a1, b1] × ⋯ × [an, bn]. A typical element is a finite union of such prisms and their complements.

Unsurprisingly, since we have finitely many rectangles and their complements involved, in this case we actually can unambiguously assign an area, and will do so soon.

Definition 35.1.3. A pre-measure μ0 on an algebra 𝒜0 is a function μ0 : 𝒜0 → [0, +∞] which satisfies the axioms: μ0(∅) = 0, and countable additivity whenever it makes sense, i.e. if A1, A2, … ∈ 𝒜0 are disjoint and ⨆ₙ Aₙ happens to lie in 𝒜0, then μ0(⨆ₙ Aₙ) = Σₙ μ0(Aₙ).

Example 35.1.4 (The pre-measure on ℝⁿ)
Let Ω = ℝ². Then, let ℒ0 be the algebra generated by rectangles [a1, a2] × [b1, b2]. We then let

μ0([a1, a2] × [b1, b2]) = (a2 − a1)(b2 − b1),

the area of the rectangle. As elements of ℒ0 are simply finite unions of rectangles and their complements, it’s not difficult to extend this to a pre-measure λ0 which behaves as you expect — although we won’t do this.

Since we are sweeping something under the rug that turns out to be conceptually important, I’ll go ahead and blue-box it.

Proposition 35.1.5 (Geometry sanity check that we won’t prove)
For Ω = ℝⁿ and ℒ0 the algebra generated by rectangular prisms, one can define a pre-measure λ0 on ℒ0.

From this point forwards, we will basically do almost no geometry1 whatsoever in defining the measure on ℬ(ℝⁿ), and only use set theory to extend our measure. So, ?? is the only sentry which checks to make sure that our “initial definition” is sane.

To put the point another way, suppose an insane scientist2 tried to define a notion of area in which every rectangle had area 1. Intuitively, this shouldn’t be possible: every rectangle can be dissected into two halves and we ought to have 1 + 1 ≠ 1. However, the only thing that would stop them is that this does not extend to a pre-measure on the algebra ℒ0. If they somehow got past that barrier and got a pre-measure, nothing in the rest of the section would prevent them from getting an entire bona fide measure with this property. Thus, in our construction of the Lebesgue measure, most of the geometric work is captured in the (omitted) proof of ??.

35.2  Outer measures

Prototypical example for this section: Keep taking Ω = ℝ².

The other way to weaken a measure is to relax the countable additivity, and this yields the following:

Definition 35.2.1. An outer measure μ∗ on a set Ω is a function μ∗ : 2^Ω → [0, +∞] satisfying the following axioms: μ∗(∅) = 0; monotonicity, i.e. μ∗(A) ≤ μ∗(B) whenever A ⊆ B; and countable subadditivity, i.e. μ∗(⋃ₙ Aₙ) ≤ Σₙ μ∗(Aₙ) for any countable collection of sets Aₙ.

(I don’t really like the word “outer measure”, since I think it is a bit of a misnomer: I would rather call it “fake measure”, since it’s not a measure either.)

The reason for the name “outer measure” is that you almost always obtain outer measures by approximating them from “outside” sets. Officially, the result is often stated as follows (as ?? ).

For a set Ω, let ℰ be any subset of 2^Ω and let ρ : ℰ → [0, +∞] be any function. Then

μ∗(E) = inf { Σ_{n=1}^∞ ρ(Eₙ) | Eₙ ∈ ℰ, E ⊆ ⋃_{n=1}^∞ Eₙ }

is an outer measure.

However, I think the above theorem is basically always wrong to use in practice, because it is way too general. As I warned with the insane scientist, we really do want some sort of sanity conditions on ρ: otherwise, if we apply the above result as stated, there is no guarantee that μ∗ will be compatible with ρ in any way.

So, I think it is really better to apply the theorem to pre-measures μ0 for which one does have some sort of guarantee that the resulting μ∗ is compatible with μ0. In practice, this is always how we will want to construct our outer measures.

Theorem 35.2.2 (Constructing outer measures from pre-measures)
Let μ0 be a pre-measure on an algebra 𝒜 0 on a set Ω.

(a)
The map μ∗ : 2^Ω → [0, +∞] defined by

μ∗(E) = inf { Σ_{n=1}^∞ μ0(Aₙ) | Aₙ ∈ 𝒜0, E ⊆ ⋃_{n=1}^∞ Aₙ }

is an outer measure.

(b)
Moreover, this measure agrees with μ0 on sets in 𝒜 0.

Intuitively, what is going on is that μ∗(A) is the infimum of coverings of A by countable unions of elements in 𝒜0. Part (b) is the first half of the compatibility condition I promised; the other half appears later as ??.

Proof of ?? . As alluded to already, part (a) is a special case of ??  (and proving it in this generality is actually easier, because you won’t be distracted by unnecessary properties).

We now check (b), that μ∗(A) = μ0(A) for A ∈ 𝒜0. One bound is quick:

Question 35.2.3. Show that μ∗(A) ≤ μ0(A).

For the reverse, suppose that A ⊆ ⋃ₙ Aₙ with Aₙ ∈ 𝒜0. Then, define the sets

B1 = A ∩ A1
B2 = (A ∩ A2) ∖ B1
B3 = (A ∩ A3) ∖ (B1 ∪ B2)
⋮

and so on. Then the Bₙ are disjoint elements of 𝒜0 with Bₙ ⊆ Aₙ, and we have rigged the definition so that ⨆ₙ Bₙ = A. Thus by definition of pre-measure,

μ0(A) = Σₙ μ0(Bₙ) ≤ Σₙ μ0(Aₙ)

as desired. □

Example 35.2.4
Let Ω = ℝ² and λ0 the pre-measure from before. Then λ∗(A) is, intuitively, the infimum of coverings of the set A by rectangles; imagine, say, A being the unit disk covered by lots of small rectangles.


35.3  Carathéodory extension for outer measures

We will now take any outer measure and turn it into a proper measure. To do this, we first need to specify the σ-algebra on which we will define the measure.

Definition 35.3.1. Let μ∗ be an outer measure. We say a set A is Carathéodory measurable with respect to μ∗, or just μ∗-measurable, if the following condition holds: for any set E ∈ 2^Ω,

μ∗(E) = μ∗(E ∩ A) + μ∗(E ∖ A).

This definition is hard to motivate, but turns out to be the right one. One way to motivate it is this: it turns out that in ℝⁿ, it will be equivalent to a reasonable geometric condition (which I will state in ??), but since that geometric definition requires information about ℝⁿ itself, this is the “right” generalization for general measure spaces.

Since our goal was to extend our 𝒜 0, we had better make sure this definition lets us measure the initial sets that we started with!

Proposition 35.3.2 (Carathéodory measurability is compatible with the initial 𝒜 0)
Suppose μ∗ was obtained from a pre-measure μ0 on an algebra 𝒜0, as in ??. Then every set in 𝒜0 is μ∗-measurable.

This is the second half of the compatibility condition that we get if we make sure our initial μ0 at least satisfies the pre-measure axioms. (The first half was (b) of ?? .)

Proof. Let A ∈ 𝒜0 and E ∈ 2^Ω; we wish to prove μ∗(E) = μ∗(E ∩ A) + μ∗(E ∖ A). The definition of outer measure already requires μ∗(E) ≤ μ∗(E ∩ A) + μ∗(E ∖ A), and so it’s enough to prove the reverse inequality.

By definition of infimum, for any ε > 0, there is a covering E ⊆ ⋃ₙ Aₙ with μ∗(E) + ε ≥ Σₙ μ0(Aₙ). But

Σₙ μ0(Aₙ) = Σₙ (μ0(Aₙ ∩ A) + μ0(Aₙ ∖ A)) ≥ μ∗(E ∩ A) + μ∗(E ∖ A)

with the first equality being by definition of pre-measure on 𝒜0, and the second by definition of μ∗ (since the sets Aₙ ∩ A certainly cover E ∩ A, for example). Thus μ∗(E) + ε ≥ μ∗(E ∩ A) + μ∗(E ∖ A). Since the inequality holds for any ε > 0, we’re done. □

To add extra icing onto the cake, here is one more niceness condition which our constructed measure will happen to satisfy.

Definition 35.3.3. A null set of a measure space (Ω, 𝒜, μ) is a set A ∈ 𝒜 with μ(A) = 0. A measure space (Ω, 𝒜, μ) is complete if whenever A is a null set, then all subsets of A are in 𝒜 as well (and hence are null sets).

This is a nice property to have, for obvious reasons. Visually, if I have a bunch of dust which I already assigned weight zero, and I blow away some of the dust, then the remainder should still have an assigned weight — zero. The extension theorem will give us σ-algebras with this property.

Theorem 35.3.4 (Carathéodory extension theorem for outer measures)
If μ∗ is an outer measure, and 𝒜cm is the set of μ∗-measurable sets, then 𝒜cm is a σ-algebra on Ω, and the restriction μcm of μ∗ to 𝒜cm gives a complete measure space.

(Phonetic remark: you can think of the superscript cm as standing for either “Carathéodory measurable” or “complete”. Both are helpful for remembering what this represents. This notation is not standard but the pun was too good to resist.)

Thus, if we compose ?? with ??, we find that every pre-measure μ0 on an algebra 𝒜0 naturally gives a σ-algebra 𝒜cm with a complete measure μcm, and our two compatibility results (namely (b) of ??, together with ??) mean that 𝒜cm ⊇ 𝒜0 and that μcm agrees with μ0 on 𝒜0.

Here is a table showing the process, where going down each row of the table corresponds to restriction process.

            Construct order   Notes
2^Ω   μ∗    Step 2            μ∗ is the outer measure obtained from μ0
𝒜cm   μcm   Step 3            𝒜cm defined as the μ∗-measurable sets;
                              (Ω, 𝒜cm, μcm) is complete
𝒜0    μ0    Step 1            μ0 is a pre-measure

35.4  Defining the Lebesgue measure

This lets us finally define the Lebesgue measure on ℝⁿ. We wrap everything together at once now.

Definition 35.4.1. We create a measure on ℝⁿ by the following procedure: start with the pre-measure λ0 on the algebra ℒ0 generated by rectangular prisms (Step 1); use ?? to obtain the Lebesgue outer measure λ∗ on 2^{ℝⁿ} (Step 2); and then use the Carathéodory extension theorem ?? to restrict λ∗ to the σ-algebra of λ∗-measurable sets (Step 3).

The resulting complete measure, denoted λ, is called the Lebesgue measure.

The σ-algebra we obtained will be called the Lebesgue σ-algebra; sets in it are said to be Lebesgue measurable.

Here is the same table from before, with the values filled in for the special case Ω = ℝⁿ, which gives us the Lebesgue σ-algebra.

                       Construct order   Notes
2^{ℝⁿ}           λ∗    Step 2            λ∗ is the Lebesgue outer measure
Lebesgue σ-alg.  λ     Step 3            the Lebesgue σ-algebra (complete)
ℒ0               λ0    Step 1            define the pre-measure λ0 on rectangles

Of course, now that we’ve gotten all the way here, if we actually want to compute any measures, we can mostly gleefully forget about how we actually constructed the measure and just use its properties. The hard part was showing that there is a way to assign measures consistently; actually figuring out what that measure’s value is, given that it exists, is often much easier. Here is an example.

Example 35.4.2 (The Cantor set has measure zero)
The standard middle-thirds Cantor set is the subset C ⊆ [0, 1] obtained as follows: we first delete the open interval (1/3, 2/3). This leaves two intervals [0, 1/3] and [2/3, 1], from which we delete the middle thirds again from both, i.e. deleting (1/9, 2/9) and (7/9, 8/9). We repeat this procedure indefinitely and let C denote the result.


It is a classic fact that C is uncountable (it consists of the ternary expansions omitting the digit 1). But it is measurable (it is an intersection of closed sets!) and we contend it has measure zero. Indeed, at the nth step, the result has measure (2/3)ⁿ leftover. So μ(C) ≤ (2/3)ⁿ for every n, forcing μ(C) = 0.

This is fantastic, but there is one elephant in the room: how are the Lebesgue σ-algebra and the Borel σ-algebra related? To answer this question briefly, I will state two results (but another answer is given in the next section). The first is a geometric interpretation of the strange Carathéodory measurable hypothesis.

Proposition 35.4.3 (A geometric interpretation of Lebesgue measurability)
A set A ⊆ ℝⁿ is Lebesgue measurable if and only if for every ε > 0, there is an open set U ⊇ A such that

λ∗(U ∖ A) < ε

where λ∗ is the Lebesgue outer measure.

I want to say that this was Lebesgue’s original formulation of “measurable”, but I’m not sure about that. In any case, we won’t need to use this, but it’s good to see that our definition of Lebesgue measurable has a down-to-earth geometric interpretation.

Question 35.4.4. Deduce that every open set is Lebesgue measurable. Conclude that the Lebesgue σ-algebra contains the Borel σ-algebra. (A different proof is given later on.)

However, the containment is proper: there are more Lebesgue measurable sets than Borel ones. Indeed, it can actually be proven using transfinite induction (though we won’t) that |ℬ (ℝ)| = |ℝ|. Using this, one obtains:

Exercise 35.4.5. Show the Borel σ-algebra is not complete. (Hint: consider the Cantor set. You won’t be able to write down an example of a non-measurable set, but you can use cardinality arguments.) Thus the Lebesgue σ-algebra strictly contains the Borel one.

Nonetheless, there is a great way to describe the Lebesgue σ-algebra, using the idea of completeness.

Definition 35.4.6. Let (Ω, 𝒜, μ) be a measure space. The completion (Ω, 𝒜̄, μ̄) is defined as follows: we let

𝒜̄ = {A ∪ N | A ∈ 𝒜, N a subset of some null set}

and μ̄(A ∪ N) = μ(A). One can check this is well-defined, and in fact μ̄ is the unique extension of μ from 𝒜 to 𝒜̄.

This looks more complicated than it is. Intuitively, all we are doing is “completing” the measure by telling μ̄ to regard any subset of a null set as having measure zero, too.

Then, the saving grace:

Theorem 35.4.7 (Lebesgue is completion of Borel)
For n, the Lebesgue measure is the completion of the Borel measure.

Proof. This actually follows from results in the next section, namely ??  and part (c) of Carathéodory for pre-measures (?? ). □

35.5  A fourth row: Carathéodory for pre-measures

Prototypical example for this section: The fourth row for the Lebesgue measure is ℬ(ℝⁿ).

In many cases, 𝒜cm is actually bigger than our original goal, and instead we only need to extend μ0 on 𝒜0 to μ on 𝒜, where 𝒜 is the σ-algebra generated by 𝒜0. Indeed, our original goal was to get ℬ(ℝⁿ), and in fact:

Exercise 35.5.1. Show that ℬ(ℝⁿ) is the σ-algebra generated by the ℒ0 we defined earlier.

Fortunately, this restriction is trivial to do.

Question 35.5.2. Show that 𝒜cm ⊇ 𝒜, so we can just restrict μcm to 𝒜.

We will in a moment add this as the fourth row in our table.

However, if this is the end goal, then a somewhat different Carathéodory theorem can be stated, because often one more niceness condition holds:

Definition 35.5.3. A pre-measure or measure μ on Ω is σ-finite if Ω can be written as a countable union Ω = ⋃ₙ Aₙ with μ(Aₙ) < ∞ for each n.

Question 35.5.4. Show that the pre-measure λ0 we had, as well as the Borel measure on ℬ(ℝⁿ), are both σ-finite.

Actually, for us, σ-finite is basically always going to be true, so you can more or less just take it for granted.

Theorem 35.5.5 (Carathéodory extension theorem for pre-measures)
Let μ0 be a pre-measure on an algebra 𝒜 0 of Ω, and let 𝒜 denote the σ-algebra generated by 𝒜 0. Let 𝒜 cm, μcm be as in ?? . Then:

(a)
The restriction of μcm to 𝒜 gives a measure μ extending μ0.
(b)
If μ0 was σ-finite, then μ is the unique extension of μ0 to 𝒜 .
(c)
If μ0 was σ-finite, then μcm is the completion of μ, hence the unique extension of μ0 to 𝒜 cm.

Here is the updated table, with comments if μ0 was indeed σ-finite.

            Construct order   Notes
2^Ω   μ∗    Step 2            μ∗ is the outer measure obtained from μ0
𝒜cm   μcm   Step 3            (Ω, 𝒜cm, μcm) is the completion of (Ω, 𝒜, μ);
                              𝒜cm defined as the μ∗-measurable sets
𝒜     μ     Step 4            𝒜 defined as the σ-algebra generated by 𝒜0
𝒜0    μ0    Step 1            μ0 is a pre-measure

And here is the table for Ω = ℝⁿ, with Borel and Lebesgue in it.

                       Construct order   Notes
2^{ℝⁿ}           λ∗    Step 2            λ∗ is the Lebesgue outer measure
Lebesgue σ-alg.  λ     Step 3            Lebesgue σ-algebra, completion of the Borel one
ℬ(ℝⁿ)            λ     Step 4            Borel σ-algebra, generated by ℒ0
ℒ0               λ0    Step 1            define the pre-measure on rectangles

Going down one row of the table corresponds to restriction, while each of the extensions μ0 → μ → μcm is unique when μ0 is σ-finite.

Proof of ??. For (a): this is just ?? and ?? put together, combined with the observation that 𝒜cm ⊇ 𝒜 ⊇ 𝒜0. Parts (b) and (c) are more technical, and omitted. □

35.6  From now on, we assume the Borel measure


35.7  A few harder problems to think about

Problem 35A (Constructing outer measures from arbitrary ρ). For a set Ω, let ℰ be any subset of 2^Ω and let ρ : ℰ → [0, +∞] be any function. Prove that

μ∗(E) = inf { Σ_{n=1}^∞ ρ(Eₙ) | Eₙ ∈ ℰ, E ⊆ ⋃_{n=1}^∞ Eₙ }

is an outer measure.

Problem 35B (The insane scientist). Let Ω = ℝ², and let ℰ be the set of (non-degenerate) rectangles. Let ρ(E) = 1 for every rectangle E ∈ ℰ. Ignoring my advice, the insane scientist uses ρ to construct an outer measure μ∗, as in ??.

(a)
Find μ∗(S) for each subset S of ℝ².
(b)
Which sets are μ∗-measurable?

You should find that no rectangle is μ∗-measurable, unsurprisingly foiling the scientist.

Problem 35C. A function f : ℝ → ℝ is continuous. Must f be measurable with respect to the Lebesgue measure on ℝ?

36  Lebesgue integration

On any measure space (Ω, 𝒜, μ) we can then, for a function f : Ω → [0, ∞], define an integral

∫_Ω f dμ.

This integral may be +∞ (even if f is finite). As the details of the construction won’t matter for us later on, we will state the relevant definitions, skip all the proofs, and also state all the properties that we actually care about. Consequently, this chapter will be quite short.

36.1  The definition

The construction is done in four steps.

Definition 36.1.1. If A is a measurable set of Ω, then the indicator function 1_A : Ω → ℝ is defined by

1_A(ω) =  1   if ω ∈ A
          0   if ω ∉ A.

Step 1 (Indicator functions) For an indicator function, we require

∫_Ω 1_A dμ := μ(A)

(which may be infinite).

We extend this linearly now for nonnegative functions which are sums of indicators: these functions are called simple functions.

Step 2 (Simple functions) Let A1, …, An be a finite collection of measurable sets. Let c1, …, cn be either nonnegative real numbers or +∞. Then we define

∫_Ω (Σ_{i=1}^n c_i 1_{A_i}) dμ := Σ_{i=1}^n c_i μ(A_i).

If c_i = ∞ and μ(A_i) = 0, we treat c_i μ(A_i) = 0.

One can check the resulting sum does not depend on the representation of the simple function as Σ c_i 1_{A_i}. In particular, it is compatible with the previous step.
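For example, with the Lebesgue measure λ on ℝ constructed in the previous chapter, the definition gives

∫_ℝ (3 · 1_{[0,1]} + 5 · 1_{[2,4]}) dλ = 3 λ([0,1]) + 5 λ([2,4]) = 3 · 1 + 5 · 2 = 13.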

Conveniently, this is already enough to define the integral for f : Ω → [0, +∞]. Note that [0, +∞] can be thought of as a topological space where we add new open sets (a, +∞] for each real number a to our usual basis of open intervals. Thus we can equip it with the Borel sigma-algebra.1

Step 3 (Nonnegative functions) For each measurable function f : Ω → [0, +∞], let

∫_Ω f dμ := sup_{0 ≤ s ≤ f} (∫_Ω s dμ)

where the supremum is taken over all simple s such that 0 ≤ s ≤ f. As before, this integral may be +∞.

One can check this is compatible with the previous definitions. At this point, we introduce an important term.

Definition 36.1.2. A measurable (nonnegative) function f : Ω → [0, +∞] is absolutely integrable or just integrable if ∫_Ω f dμ < ∞.

Warning: I find “integrable” to be really confusing terminology. Indeed, every measurable function from Ω to [0, +∞] can be assigned a Lebesgue integral, it’s just that this integral may be +∞. So the definition is far more stringent than the name suggests. Even constant functions can fail to be integrable:

Example 36.1.3 (We really should call it “finitely integrable”)
The constant function 1 is not integrable on ℝ, since ∫_ℝ 1 dμ = μ(ℝ) = +∞.

For this reason, I will usually prefer the term “absolutely integrable”. (If it were up to me, I would call it “finitely integrable”, and usually do so privately.)

Finally, this lets us integrate general functions.

Definition 36.1.4. In general, a measurable function f : Ω → [−∞, ∞] is absolutely integrable or just integrable if |f| is.

Since we’ll be using the first word, this is easy to remember: “absolutely integrable” requires taking absolute values.

Step 4 (Absolutely integrable functions) If f : Ω → [−∞, ∞] is absolutely integrable, then we define

f⁺(x) = max{f(x), 0}
f⁻(x) = min{f(x), 0}

and set

∫_Ω f dμ = ∫_Ω |f⁺| dμ − ∫_Ω |f⁻| dμ

which in particular is finite.

You may already start to see that we really like nonnegative functions: with the theory of measures, it is possible to integrate them, and it’s even okay to throw in +∞’s everywhere. But once we start dealing with functions that can be either positive or negative, we have to start adding finiteness restrictions — essentially what we’re doing is splitting the function into its positive and negative parts, requiring both to be finite, and then integrating.
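As a quick worked instance of Step 4 (checkable against the Riemann integral): take f(x) = x on Ω = [−1, 2] with the Lebesgue measure. Then f⁺ = x · 1_{[0,2]} and f⁻ = x · 1_{[−1,0]}, so

∫_{[−1,2]} f dμ = ∫ |f⁺| dμ − ∫ |f⁻| dμ = 2 − 1/2 = 3/2,

agreeing with ∫_{−1}^{2} x dx = 3/2.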

To finish this section, we state for completeness a result that you probably could have guessed was true. Fix Ω = (Ω, 𝒜, μ), and let f and g be measurable real-valued functions such that f(x) = g(x) almost everywhere. Then ∫_Ω f dμ = ∫_Ω g dμ, with either side absolutely integrable if and only if the other is.

There are more famous results like monotone/dominated convergence that are also true, but we won’t state them here as we won’t really have a use for them in the context of probability. (They appear later on in a bonus chapter.)

36.2  Relation to Riemann integrals (or: actually computing Lebesgue integrals)

For closed intervals, this actually just works out of the box.

Theorem 36.2.1 (Lebesgue integral generalizes Riemann integral)
Let f : [a, b] → ℝ be a Riemann integrable function (where [a, b] is equipped with the Borel measure). Then f is also Lebesgue integrable and the integrals agree:

∫_a^b f(x) dx = ∫_{[a,b]} f dμ.

Thus in practice, we do all theory with Lebesgue integrals (they’re nicer), but when we actually need to compute ∫_{[1,4]} x² dμ we just revert back to our usual antics with the Fundamental Theorem of Calculus.

Example 36.2.2 (Integrating x² over [1, 4])
Reprising our old example:

∫_{[1,4]} x² dμ = ∫_1^4 x² dx = (1/3)·4³ − (1/3)·1³ = 21.

This even works for improper integrals, if the functions are nonnegative. The statement is a bit cumbersome to write down, but here it is.

Theorem 36.2.3 (Improper integrals are nice Lebesgue ones)
Let f ≥ 0 be a nonnegative continuous function defined on (a, b) ⊆ ℝ, possibly allowing a = −∞ or b = ∞. Then

∫_{(a,b)} f dμ = lim_{a′→a⁺, b′→b⁻} ∫_{a′}^{b′} f(x) dx

where we allow both sides to be +∞ if f is not absolutely integrable.

The right-hand side makes sense since each [a′, b′] ⊆ (a, b) is a compact interval on which f is continuous. This means that improper Riemann integrals of nonnegative functions can just be regarded as Lebesgue ones over the corresponding open intervals.

It’s probably better to just look at an example though.

Example 36.2.4 (Integrating 1/√x on (0, 1))
For example, you might be familiar with improper integrals like

∫_0^1 (1/√x) dx := lim_{ε→0⁺} ∫_ε^1 (1/√x) dx = lim_{ε→0⁺} (2√1 − 2√ε) = 2.

(Note this appeared before as ??.) In the Riemann integration situation, we needed the limit as ε → 0⁺ since otherwise 1/√x is not defined as a function [0, 1] → ℝ. However, it is a measurable nonnegative function (0, 1) → [0, +∞], and hence

∫_{(0,1)} (1/√x) dμ = 2.

If f is not nonnegative, then all bets are off. Indeed ??  is the famous counterexample.

36.3  A few harder problems to think about

Problem 36A (The indicator of the rationals). Take the indicator function 1_ℚ : ℝ → {0, 1} ⊆ ℝ for the rational numbers.

(a)
Prove that 1_ℚ is not Riemann integrable.
(b)
Show that ∫_ℝ 1_ℚ dμ exists and determine its value — the one you expect!

Problem 36B (An improper Riemann integral with sign changes). Define f : (1, ∞) → ℝ by f(x) = sin(x)/x. Show that f is not absolutely integrable, but that the improper Riemann integral

∫_1^∞ f(x) dx := lim_{b→∞} ∫_1^b f(x) dx

nonetheless exists.

37  Swapping order with Lebesgue integrals

37.1  Motivating limit interchange

Prototypical example for this section: 1_ℚ is good!

One of the issues with the Riemann integral is that it behaves badly with respect to convergence of functions, and the Lebesgue integral deals with this. This is therefore often given as a poster child for why the Lebesgue integral has better behavior than the Riemann one.

We technically have already seen this: consider the indicator function 1_ℚ, which is not Riemann integrable by ??. But we can readily compute its Lebesgue integral over [0, 1], as

∫_{[0,1]} 1_ℚ dμ = μ([0,1] ∩ ℚ) = 0

since [0,1] ∩ ℚ is countable.

This could be thought of as a failure of convergence for the Riemann integral.

Example 37.1.1 (1_ℚ is a limit of finitely supported functions)
We can define the sequence of functions g1, g2, … by

gₙ(x) =  1   if (n!)x is an integer
         0   else.

Then each gₙ is piecewise continuous and hence Riemann integrable on [0, 1] (with integral zero), but lim_{n→∞} gₙ = 1_ℚ is not.

The limit here is defined in the following sense:

Definition 37.1.2. Let f and f1, f2, … : Ω → ℝ be a sequence of functions. Suppose that for each ω ∈ Ω, the sequence

f1(ω), f2(ω), f3(ω), …

converges to f(ω). Then we say (fₙ)ₙ converges pointwise to the limit f, written lim_{n→∞} fₙ = f.

We can define liminf_{n→∞} fₙ and limsup_{n→∞} fₙ similarly.

This is actually a fairly weak notion of convergence, for example:

Exercise 37.1.3 (Witch’s hat). Find a sequence of continuous functions on [−1, 1] which converges pointwise to the function f given by

f(x) =  1   x = 0
        0   otherwise.

This is why when thinking about the Riemann integral it is commonplace to work with stronger conditions like “uniformly convergent” and the like. However, with the Lebesgue integral, we can mostly not think about these!

37.2  Overview

The three big-name results for exchanging pointwise limits with Lebesgue integrals are: Fatou’s lemma, the monotone convergence theorem, and the dominated convergence theorem. The first is the key workhorse; the other two follow quickly from it.

37.3  Fatou’s lemma

Without further ado:

Lemma 37.3.1 (Fatou’s lemma)
Let f1, f2, … : Ω → [0, +∞] be a sequence of nonnegative measurable functions. Then liminf_{n→∞} fₙ : Ω → [0, +∞] is measurable and

∫_Ω (liminf_{n→∞} fₙ) dμ ≤ liminf_{n→∞} (∫_Ω fₙ dμ).

Here we allow either side to be +∞.

Notice that there are no extra hypotheses on fₙ other than nonnegativity, which makes this surprisingly versatile if you are ever trying to prove some general result.

37.4  Everything else

The big surprise is how quickly all the “big-name” theorems follow from Fatou’s lemma. Here is the so-called “monotone convergence theorem”.

Corollary 37.4.1 (Monotone convergence theorem)
Let f and f1, f2, … : Ω → [0, +∞] be a sequence of nonnegative measurable functions such that lim_{n→∞} fₙ = f and fₙ(ω) ≤ f(ω) for each n. Then f is measurable and

lim_{n→∞} (∫_Ω fₙ dμ) = ∫_Ω f dμ.

Here we allow either side to be +∞.

Proof. We have

∫_Ω f dμ = ∫_Ω (liminf_{n→∞} fₙ) dμ
         ≤ liminf_{n→∞} (∫_Ω fₙ dμ)
         ≤ limsup_{n→∞} (∫_Ω fₙ dμ)
         ≤ ∫_Ω f dμ

where the first inequality is by Fatou’s lemma, the second is liminf ≤ limsup, and the third is by the fact that ∫_Ω fₙ dμ ≤ ∫_Ω f dμ for every n. This implies all the inequalities are equalities and we are done. □

Remark 37.4.2 (The monotone convergence theorem does not require monotonicity!) In the literature it is much more common to see the hypothesis f1(ω) ≤ f2(ω) ≤ ⋯ ≤ f(ω) rather than just fₙ(ω) ≤ f(ω) for all n, which is where the theorem gets its name. However, as we have shown, this hypothesis is superfluous! This is pointed out in https://mathoverflow.net/a/296540/70654, as a response to a question entitled “Do you know of any very important theorems that remain unknown?”.

Example 37.4.3 (Monotone convergence gives ∫ 1_ℚ)
This already implies ??. Letting gₙ be the indicator function of (1/n!)ℤ as described in that example, we have gₙ ≤ 1_ℚ and lim_{n→∞} gₙ(x) = 1_ℚ(x) for each individual x. So since ∫_{[0,1]} gₙ = 0 for each n, this gives ∫_{[0,1]} 1_ℚ = 0 as we already knew.

The most famous result, though is the following.

Corollary 37.4.4 (Fatou–Lebesgue theorem)
Let f1, f2, … : Ω → ℝ be a sequence of measurable functions. Assume that g : Ω → ℝ is an absolutely integrable function for which |fₙ(ω)| ≤ |g(ω)| for all ω ∈ Ω. Then the chain of inequalities

∫_Ω (liminf_{n→∞} fₙ) dμ ≤ liminf_{n→∞} (∫_Ω fₙ dμ) ≤ limsup_{n→∞} (∫_Ω fₙ dμ) ≤ ∫_Ω (limsup_{n→∞} fₙ) dμ

holds.

Proof. There are three inequalities: the middle one is just liminf ≤ limsup. For the first and third, apply Fatou’s lemma to the nonnegative functions g + fₙ and g − fₙ, respectively. □

Exercise 37.4.5. Where is the fact that g is absolutely integrable used in this proof?

Corollary 37.4.6 (Dominated convergence theorem)
Let f1, f2, … : Ω → ℝ be a sequence of measurable functions such that f = lim_{n→∞} fₙ exists. Assume that g : Ω → ℝ is an absolutely integrable function for which |fₙ(ω)| ≤ |g(ω)| for all ω ∈ Ω. Then

∫_Ω f dμ = lim_{n→∞} (∫_Ω fₙ dμ).

Proof. If f(ω) = limn→∞fn(ω), then f(ω) = liminf n→∞fn(ω) = limsupn→∞fn(ω). So all the inequalities in the Fatou-Lebesgue theorem become equalities, since the leftmost and rightmost sides are equal. □

Note this gives yet another way to verify ?? . In general, the dominated convergence theorem is a favorite cliché for undergraduate exams, because it is easy to create questions for it. Here is one example showing how they all look.

Example 37.4.7 (The usual Lebesgue dominated convergence examples)
Suppose one wishes to compute

lim_{n→∞} ∫_{(0,1)} (n sin(n⁻¹x))/√x dx;

then one starts by observing that the inner term is bounded by the absolutely integrable function x^{−1/2}. Therefore the limit equals

∫_{(0,1)} lim_{n→∞} (n sin(n⁻¹x))/√x dx = ∫_{(0,1)} x/√x dx = ∫_{(0,1)} √x dx = 2/3.

37.5  Fubini and Tonelli


37.6  A few harder problems to think about


38  Bonus: A hint of Pontryagin duality

In this short chapter we will give statements about how to generalize our Fourier analysis (a bonus chapter ?? ) to a much wider class of groups G.

38.1  LCA groups

Prototypical example for this section: 𝕋, .

Earlier we played with ℝ, which is nice because in addition to being a topological space, it is also an abelian group under addition. These sorts of objects which are both groups and spaces have a name.

Definition 38.1.1. A topological group G is a Hausdorff1 topological space equipped also with a group operation (G, ·), such that both maps

G × G → G  by  (x, y) ↦ xy
G → G  by  x ↦ x⁻¹

are continuous.

For our Fourier analysis, we need some additional conditions.

Definition 38.1.2. A locally compact abelian (LCA) group G is one for which the group operation is abelian, and moreover the topology is locally compact: for every point p of G, there exists a compact subset K of G such that K ∋ p, and K contains some open neighborhood of p.

Our previous examples all fall into this category:

Example 38.1.3 (Examples of locally compact abelian groups)

(a)
Any finite abelian group Z, with the discrete topology.
(b)
The integers ℤ, also with the discrete topology.
(c)
The circle group 𝕋 = ℝ/ℤ, which is compact.
(d)
The real line ℝ under addition, and more generally ℝⁿ.

These conditions turn out to be enough for us to define a measure on the space G. The relevant theorem, which we will just quote:

Theorem 38.1.4 (Haar measure)
Let G be a locally compact abelian group. We regard it as a measurable space using its Borel σ-algebra ℬ(G). There exists a nonzero measure μ : ℬ(G) → [0, ∞], called the Haar measure, satisfying the following properties:

(a)
μ(x ∗ S) = μ(S) for every x ∈ G and measurable S (translation invariance);
(b)
μ(K) < ∞ for every compact K ⊆ G;
(c)
μ is outer regular on Borel sets;
(d)
μ is inner regular on open sets.

Moreover, it is unique up to scaling by a positive constant.

Remark 38.1.5 — Note that if G is compact, then μ(G) is finite (and positive). For this reason the Haar measure on a compact LCA group G is usually normalized so that μ(G) = 1.

For this chapter, we will only use the first two properties at all, and the other two are just mentioned for completeness. Note that this actually generalizes the chapter where we constructed a measure on ℬ(ℝⁿ), since ℝⁿ is an LCA group!

So, in short: if we have an LCA group, we have a measure μ on it.

38.2  The Pontryagin dual

Now the key definition is:

Definition 38.2.1. Let G be an LCA group. Then its Pontryagin dual is the abelian group

Ĝ := {continuous group homomorphisms ξ : G → 𝕋}.

The maps ξ are called characters. It can itself be made into an LCA group.

Example 38.2.2 (Examples of Pontryagin duals)

(a)
ℤ̂ ≅ 𝕋, since a character ξ : ℤ → 𝕋 is determined by ξ(1) ∈ 𝕋.
(b)
𝕋̂ ≅ ℤ: the characters are x ↦ nx for n ∈ ℤ.
(c)
ℝ̂ ≅ ℝ: the characters are x ↦ ξx (mod ℤ) for ξ ∈ ℝ.
(d)
(ℤ/nℤ)∧ ≅ ℤ/nℤ: the characters are x ↦ ξx/n (mod ℤ) for ξ ∈ ℤ/nℤ.

Exercise 38.2.3 (Ẑ ≅ Z, for those who read ?? ). If Z is a finite abelian group, show that Ẑ ≅ Z, using the results of the previous example. You may now recognize that the bilinear form ⟨−,−⟩ : Z × Z → 𝕋 is exactly a choice of isomorphism Z → Ẑ. It is not “canonical”.

True to its name as the dual, and in analogy with (V∨)∨ ≅ V for vector spaces V, we have:

Theorem 38.2.4 (Pontryagin duality theorem)
For any LCA group G, there is an isomorphism

G ≅ (Ĝ)∧   by   x ↦ (ξ ↦ ξ(x)).

The compact case is especially nice.

Proposition 38.2.5 (G compact ⟺ Ĝ discrete)
Let G be an LCA group. Then G is compact if and only if Ĝ is discrete.

Proof. ?? . □

38.3  The orthonormal basis in the compact case

Let G be a compact LCA group, and work with its Haar measure. We may now let L²(G) be the space of square-integrable functions to ℂ, i.e.

L²(G) = { f : G → ℂ such that ∫_G |f|² < ∞ }.

Thus we can equip it with the inner form

⟨f, g⟩ = ∫_G f · ḡ.

In that case, we get all the results we wanted before:

Theorem 38.3.1 (Characters of G form an orthonormal basis)
Assume G is LCA and compact (so Ĝ is discrete). Then the characters

e_ξ for ξ ∈ Ĝ,   defined by   e_ξ(x) = e(ξ(x)) = exp(2πi ξ(x)),

form an orthonormal basis of L²(G). Thus for each f ∈ L²(G) we have

f = Σ_{ξ∈Ĝ} f̂(ξ) e_ξ

where

f̂(ξ) = ⟨f, e_ξ⟩ = ∫_G f(x) exp(−2πi ξ(x)) dμ.

The sum over ξ ∈ Ĝ makes sense since Ĝ is discrete.
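For a concrete finite instance one can check this numerically. The following Python snippet is my own sketch, taking G = ℤ/nℤ with the normalized counting measure μ(G) = 1, and verifying that the n characters are orthonormal in L²(G):

    import numpy as np

    n = 6
    x = np.arange(n)
    # Row xi is the character e_xi(x) = exp(2*pi*i*xi*x/n).
    chars = np.exp(2j * np.pi * np.outer(x, x) / n)
    # Inner products <e_xi, e_eta> = (1/n) sum_x e_xi(x) * conj(e_eta(x)).
    gram = chars @ chars.conj().T / n
    print(np.allclose(gram, np.eye(n)))         # True: orthonormal basis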

38.4  The Fourier transform of the non-compact case

If G is LCA but not compact, then Theorem 38.3.1 becomes false. On the other hand, it’s still possible to define Ĝ. We can then try to write down the Fourier coefficients anyways: let

f̂(ξ) = ∫_G f · ē_ξ dμ

for ξ ∈ Ĝ and f : G → ℂ. The results are less fun in this case, but we still have, for example:

Theorem 38.4.1 (Fourier inversion formula in the non-compact case)
Let μ be a Haar measure on G. Then there exists a unique Haar measure ν on Ĝ (called the dual measure) such that: whenever f ∈ L¹(G) and f̂ ∈ L¹(Ĝ), we have

f(x) = ∫_{Ĝ} f̂(ξ) e_ξ(x) dν

for almost all x ∈ G (with respect to μ). If f is continuous, this holds for all x.

So while we don’t have the niceness of a full inner product from before, we can still in some situations at least write f as an integral, in sort of the same way as before.

In particular, the resulting transforms have special names for a few special choices of G, as summarized in the next section.

38.5  Summary

We summarize our various flavors of Fourier analysis from the previous sections in the following table. In the first part of the table G is compact; in the second part G is not.

Name                                | Domain G      | Dual Ĝ           | Characters
Binary Fourier analysis             | {±1}ⁿ         | S ⊆ {1, …, n}    | ∏_{s∈S} x_s
Fourier transform on finite groups  | Z             | ξ ∈ Ẑ ≅ Z        | e(ξ · x)
Discrete Fourier transform          | ℤ/nℤ          | ξ ∈ ℤ/nℤ         | e(ξx/n)
Fourier series                      | 𝕋 = [−π, π]   | n ∈ ℤ            | exp(inx)
Continuous Fourier transform        | ℝ             | ξ ∈ ℝ            | e(ξx)
Discrete time Fourier transform     | ℤ             | ξ ∈ 𝕋 ≅ [−π, π]  | exp(iξn)

You might notice that the various names are awful. This is part of the reason I got confused as a high school student: every type of Fourier series above has its own Wikipedia article. If it were up to me, we would just use the term “G-Fourier transform”, and that would make everyone’s lives a lot easier.

38.6  A few harder problems to think about

Problem 38A. If G is compact, so that Ĝ is discrete, describe the dual measure ν.

Problem 38B. Show that an LCA group G is compact if and only if Ĝ is discrete. (You will need the compact-open topology for this.)

Part XI
Probability (TO DO)

39  Random variables (TO DO)


Having properly developed the Lebesgue measure and the integral on it, we can now proceed to develop random variables.

39.1  Random variables

With all this set-up, random variables are going to be really quick to define.

Definition 39.1.1. A (real) random variable X on a probability space Ω = (Ω, 𝒜) is a measurable function X : Ω → ℝ, where ℝ is equipped with the Borel σ-algebra.

In particular, addition of random variables, etc., all makes sense, as we can just add pointwise. Also, we can integrate X over Ω, by the previous chapter.

Definition 39.1.2 (First properties of random variables). Given a random variable X, the expected value of X is defined by the Lebesgue integral

𝔼[X] = ∫_Ω X(ω) dμ.

Confusingly, the letter μ is often used for expected values.

The kth moment of X is defined as 𝔼[Xᵏ], for each integer k ≥ 1. The variance of X is then defined as

Var(X) = 𝔼[(X − 𝔼[X])²].

Question 39.1.3. Show that 1_A is a random variable (just check that it is Borel measurable), and that its expected value is μ(A).

An important property of expected value you probably already know:

Theorem 39.1.4 (Linearity of expectation)
If X and Y are random variables on Ω then

𝔼[X + Y] = 𝔼[X] + 𝔼[Y].

Proof. By linearity of the Lebesgue integral, 𝔼[X + Y] = ∫_Ω (X(ω) + Y(ω)) dμ = ∫_Ω X(ω) dμ + ∫_Ω Y(ω) dμ = 𝔼[X] + 𝔼[Y]. □

Note that X and Y do not have to be “independent” here: a notion we will define shortly.
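As a sanity check (a simulation sketch of my own, not from the text), one can see numerically that linearity holds even when X and Y are highly dependent:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 1.0, size=10**6)
    Y = X**2                                    # completely dependent on X
    # Both printed values should be near E[X] + E[Y] = 1/2 + 1/3.
    print(np.mean(X + Y), np.mean(X) + np.mean(Y))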

39.2  Distribution functions

39.3  Examples of random variables

39.4  Characteristic functions

39.5  Independent random variables

39.6  A few harder problems to think about

Problem 39A (Equidistribution). Let X₁, X₂, … be i.i.d. uniform random variables on [0,1]. Show that almost surely the Xᵢ are equidistributed, meaning that

lim_{N→∞} #{1 ≤ i ≤ N | a ≤ Xᵢ(ω) ≤ b} / N = b − a   for all 0 ≤ a < b ≤ 1

holds for almost all choices of ω.

Problem 39B (Side length of triangle independent from median). Let X₁, Y₁, X₂, Y₂, X₃, Y₃ be six independent standard Gaussians. Define the triangle ABC in the Cartesian plane by A = (X₁, Y₁), B = (X₂, Y₂), C = (X₃, Y₃). Prove that the length of side BC is independent of the length of the A-median.

40  Large number laws (TO DO)


40.1  Notions of convergence

40.1.i  Almost sure convergence

Definition 40.1.1. Let X, Xₙ be random variables on a probability space Ω. We say Xₙ converges almost surely to X if

μ({ω ∈ Ω : limₙ Xₙ(ω) = X(ω)}) = 1.

This is a very strong notion of convergence: it says in almost every world, the values of Xn converge to X. In fact, it is almost better for me to give a non-example.

Example 40.1.2 (Non-example of almost sure convergence)
Imagine an immortal skeleton archer is practicing shots, and on the nth shot, he scores a bulls-eye with probability 1 − 1/n (which tends to 1 because the archer improves over time). Let Xₙ ∈ {0, 1, …, 10} be the score of the nth shot.

Although the skeleton is gradually approaching perfection, there are almost no worlds in which the archer misses only finitely many shots: that is

μ({ω ∈ Ω : limₙ Xₙ(ω) = 10}) = 0.

40.1.ii  Convergence in probability

Therefore, for many purposes we need a weaker notion of convergence.

Definition 40.1.3. Let X, Xₙ be random variables on a probability space Ω. We say Xₙ converges in probability to X if for every 𝜀 > 0 and δ > 0, we have

μ(ω ∈ Ω : |Xₙ(ω) − X(ω)| < 𝜀) ≥ 1 − δ

for n large enough (in terms of 𝜀 and δ).

In this sense, our skeleton archer does succeed: for any δ > 0, if n > δ⁻¹ then the skeleton archer does hit a bulls-eye in at least a 1 − δ fraction of the worlds. In general, you can think of this as saying that for any δ > 0, the chance of an 𝜀-anomaly event at the nth stage eventually drops below δ.
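A small Monte Carlo sketch (my own, assuming the shots are independent) makes the contrast vivid: over the first N shots, the chance of some miss after a fixed shot m is 1 − ∏_{n>m}(1 − 1/n) = 1 − m/N, which tends to 1 as N → ∞, so almost no world has only finitely many misses.

    import numpy as np

    rng = np.random.default_rng(1)
    m, N, worlds = 10, 10**4, 500
    ns = np.arange(m + 1, N + 1)
    # Shot n is a miss with probability 1/n, independently.
    misses = rng.random((worlds, ns.size)) < 1.0 / ns
    # Fraction of worlds with at least one miss after shot m:
    print(misses.any(axis=1).mean())            # approx 1 - m/N = 0.999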

Remark 40.1.4 — To mask δ from the definition, this is sometimes written instead as: for all 𝜀 > 0,

lim_{n→∞} μ(ω ∈ Ω : |Xₙ(ω) − X(ω)| < 𝜀) = 1.

I suppose it doesn’t make much difference, though I personally don’t like the asymmetry.

40.1.iii  Convergence in law

40.2  A few harder problems to think about

Problem 40A (Quantifier hell). In the definition of convergence in probability suppose we allowed δ = 0 (rather than δ > 0). Show that the modified definition is equivalent to almost sure convergence.

Problem 40B (Almost sure convergence is not topologizable). Consider the space of all random variables on Ω = [0,1]. Prove that it’s impossible to impose a metric on this space which makes the following statement true:

A sequence X₁, X₂, … of random variables converges almost surely to X if and only if Xᵢ converges to X in the metric.

41  Stopped martingales (TO DO)

41.1  How to make money almost surely

We now take our newfound knowledge of measure theory to a casino.

Here’s the most classical example that shows up: a casino lets us play a game where we can bet any amount of money on a fair coin flip, but with bad odds: we win $n if the coin is heads, but lose $2n if the coin is tails, for a value of n of our choice. This seems like a game that no one in their right mind would want to play.

Well, if we have unbounded time and money, we actually can almost surely make a profit.

Example 41.1.1 (Being even greedier than 18th century France)
In the game above, we start by betting $1, and then triple our bet after each loss. The first time the coin comes up heads, say on the bet of $3ᵏ, we win $3ᵏ against accumulated losses of 2(1 + 3 + ⋯ + 3ᵏ⁻¹) = 3ᵏ − 1, so we come out exactly $1 ahead and stop.

Since the coin will almost surely show heads eventually, we make money whenever that happens. In fact, the expected amount of time until a coin shows heads is only 2 flips! What could go wrong?
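Here is a tiny simulation of the strategy (a sketch of my own, with the tripling rule described above):

    import numpy as np

    rng = np.random.default_rng(2)

    def play_once():
        """Triple the bet after each tails; quit at the first heads."""
        wealth, bet = 0, 1
        while True:
            if rng.random() < 0.5:              # heads: win the bet, stop
                return wealth + bet
            wealth -= 2 * bet                   # tails: lose twice the bet
            bet *= 3

    print([play_once() for _ in range(10)])     # [1, 1, ..., 1] every time

Every run nets exactly $1; the catch, of course, is that the interim losses along the way are unbounded.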

This chapter will show that under sane conditions such as “finite time” or “finite money”, one cannot actually make money in this way — the optional stopping theorem. This will give us an excuse to define conditional probabilities, and then talk about martingales (which generalize the fair casino).

Once we realize that trying to extract money from Las Vegas is a lost cause, we will stop gambling and then return to solving math problems, by showing some tricky surprises, where problems that look like they have nothing to do with gambling can be solved by considering a suitable martingale.

In everything that follows, Ω = (Ω, 𝒜 ) is a probability space.

41.2  Sub-σ-algebras and filtrations

Prototypical example for this section: σ-algebra generated by a random variable, and coin flip filtration.

We considered our Ω as a space of worlds, equipped with a σ-algebra 𝒜 that lets us integrate over Ω. However, it is a sad fact of life that at any given time, you only know partial information about the world. For example, at the time of writing, we know that the world did not end in 2012 (see https://en.wikipedia.org/wiki/2012_phenomenon), but the fate of humanity in future years remains slightly uncertain.

Let’s write this measure-theoretically: we could consider

Ω = A ⊔ B
A = {ω for which the world ends in 2012}
B = {ω for which the world does not end in 2012}.

We will assume that A and B are measurable sets, that is, A, B ∈ 𝒜. That means we could have good fun arguing about what the values of μ(A) and μ(B) should be (“a priori probability that the world ends in 2012”), but let’s move on to a different silly example.

We will now introduce a new notion that we will need when we define conditional probabilities later.

Definition 41.2.1. Let Ω = (Ω, 𝒜) be a probability space. A sub-σ-algebra ℱ on Ω is exactly what it sounds like: a σ-algebra ℱ on the set Ω such that each A ∈ ℱ is measurable (i.e., ℱ ⊆ 𝒜).

The motivation is that ℱ is the σ-algebra of sets which let us ask questions about some piece of information. For example, in the 2012 example we gave above, we might take ℱ = {∅, A, B, Ω}, which are the sets we care about if we are thinking only about 2012.

Here are some more serious examples.

Example 41.2.2 (Examples of sub-σ-algebras)

(a)
Let X : Ω → {1, 2, 3} be a random variable taking on one of three values. If we’re interested in X then we could define

A = {ω : X(ω) = 1}
B = {ω : X(ω) = 2}
C = {ω : X(ω) = 3}

and then take

ℱ = {∅, A, B, C, A ∪ B, B ∪ C, C ∪ A, Ω}.

This is a sub-σ-algebra on Ω that lets us ask questions about X, like “what is the probability that X = 3?”, say.

(b)
Now suppose Y : Ω → [0,1] is another random variable. If we are interested in Y, the ℱ that captures our curiosity is

ℱ = {Y^pre(B) | B ⊆ [0,1] is measurable}.

You might notice a trend here which we formalize now:

Definition 41.2.3. Let X : Ω → ℝ be a random variable. The sub-σ-algebra generated by X is defined by

σ(X) := {X^pre(B) | B ⊆ ℝ is measurable}.

If X₁, … is a sequence (finite or infinite) of random variables, the sub-σ-algebra generated by them is the smallest σ-algebra which contains σ(Xᵢ) for each i.

Finally, we can put a lot of these together — since we’re talking about time, we learn more as we grow older, and this can be formalized.

Definition 41.2.4. A filtration on Ω = (Ω, 𝒜) is a nested sequence

ℱ₀ ⊆ ℱ₁ ⊆ ℱ₂ ⊆ ⋯

of sub-σ-algebras on Ω.

Example 41.2.5 (Filtration)
Suppose you’re bored in an infinitely long class and start flipping a fair coin to pass the time. (Accordingly, we could let Ω = {H,T}^∞ consist of infinite sequences of heads H and tails T.) We could let ℱₙ denote the sub-σ-algebra generated by the values of the first n coin flips; so ℱ₀ = {∅, Ω}, while ℱ₁ also distinguishes the outcome of the first flip, and so on.

Exercise 41.2.6. In the previous example, compute the cardinality |ℱₙ| for each integer n.

41.3  Conditional expectation

Prototypical example for this section: 𝔼(X | X + Y) for X and Y distributed over [0,1].

We’ll need the definition of conditional probability to define a martingale, but this turns out to be surprisingly tricky. Let’s consider the following simple example to see why.

Example 41.3.1 (Why high-school methods aren’t enough here)
Suppose we have two independent random variables X, Y distributed uniformly over [0,1] (so we may as well take Ω = [0,1]2). We might try to ask the question:

“what is the expected value of X given that X + Y = 0.6”?

Intuitively, we know the answer has to be 0.3. However, if we try to write down a definition, we quickly run into trouble. Ideally we want to say something like

𝔼[X given X + Y = 0.6] = (∫_S X) / (∫_S 1)   where   S = {ω ∈ Ω | X(ω) + Y(ω) = 0.6}.

The problem is that S is a set of measure zero, so we quickly run into 0/0, meaning a definition of this shape will not work out.

The way that this is typically handled in measure theory is to use the notion of sub-σ-algebra that we defined. Let ℱ be a sub-σ-algebra which captures the information we are conditioning on. We then create a function assigning the “conditional expectation” to every point ω ∈ Ω, which is measurable with respect to ℱ.
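Before formalizing this, here is a crude numerical rendering of the intuition (a sketch of my own): conditioning on X + Y lying near 0.6, rather than exactly equal to it, avoids the 0/0 issue, and the conditional average of X indeed approaches 0.3 as the window shrinks.

    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.uniform(size=10**6)
    Y = rng.uniform(size=10**6)
    S = X + Y
    for eps in [0.1, 0.01, 0.001]:
        window = np.abs(S - 0.6) < eps          # worlds with S near 0.6
        print(eps, X[window].mean())            # approaches 0.3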


Proposition 41.3.2 (Conditional expectation definition)
Let X : Ω → ℝ be an absolutely integrable random variable (meaning 𝔼[|X|] < ∞) over a probability space Ω, and let ℱ be a sub-σ-algebra on it.

Then there exists a function η : Ω → ℝ satisfying the following two properties:

(a)
η is ℱ-measurable;
(b)
∫_A η dμ = ∫_A X dμ for every A ∈ ℱ.

Moreover, this random variable is unique up to almost sureness.

Proof. Omitted, but relevant buzzword used is “Radon-Nikodym derivative”. □

Definition 41.3.3. Let η be as in the previous proposition. Then we call η the conditional expectation of X with respect to ℱ, and denote it by 𝔼(X | ℱ).

More fine print:

Remark 41.3.4 (This notation is terrible) The notation 𝔼(X | ℱ) is admittedly confusing, since it is actually an entire function Ω → ℝ, rather than just a real number like 𝔼[X]. For this reason I try to be careful to remember to use parentheses rather than square brackets for conditional expectations; not everyone does this.

Abuse of Notation 41.3.5. In addition, when we write Y = 𝔼(X | ℱ), there is some abuse of notation happening here, since 𝔼(X | ℱ) is defined only up to some reasonable uniqueness (i.e. up to measure zero changes). So this really means that “Y satisfies the hypotheses of ?? ”, but this is so pedantic that no one bothers.


41.4  Supermartingales

Prototypical example for this section: Visiting a casino is a supermartingale, assuming house odds.

Definition 41.4.1. Let X₀, X₁, … be a sequence of random variables on a probability space Ω, and let ℱ₀ ⊆ ℱ₁ ⊆ ⋯ be a filtration.

Then (Xₙ)ₙ≥₀ is a supermartingale with respect to (ℱₙ)ₙ≥₀ if the following conditions hold:

(a)
each Xₙ is ℱₙ-measurable and absolutely integrable;
(b)
𝔼(Xₙ | ℱₙ₋₁) ≤ Xₙ₋₁ holds almost surely, for each n ≥ 1.

In a submartingale the inequality ≤ is replaced with ≥, and in a martingale it is replaced by =.

Abuse of Notation 41.4.2 (No one uses that filtration thing anyways). We will always take ℱₙ to be the σ-algebra generated by the variables X₀, X₁, …, Xₙ, and do so without further comment. Nonetheless, all the results that follow hold in the more general setting of a supermartingale with respect to some filtration.

We will prove all our theorems for supermartingales; the analogous versions for submartingales can be obtained by replacing ≤ with ≥ everywhere (since Xₙ is a submartingale iff −Xₙ is a supermartingale), and for martingales by replacing ≤ with = everywhere (since Xₙ is a martingale iff it is both a supermartingale and a submartingale).

Let’s give examples.

Example 41.4.3 (Supermartingales)

(a)
Random walks: an ant starts at the position 0 on the number line. Every minute, it flips a fair coin and either walks one step left or one step right. If X_t is the position at the tth time, then X_t is a martingale, because

𝔼(X_t | X₀, …, X_{t−1}) = ((X_{t−1} + 1) + (X_{t−1} − 1)) / 2 = X_{t−1}.

(A quick simulation appears after this example.)
(b)
Casino game: Consider a gambler using the strategy described at the beginning of the chapter. Their wealth is a supermartingale, since every bet the gambler makes has negative expected value (the odds are bad).
(c)
Multiplying independent variables: Let X₁, X₂, … be independent (not necessarily identically distributed) integrable random variables with mean 1. Then the sequence Y₁, Y₂, … defined by

Yₙ := X₁X₂⋯Xₙ

is a martingale, as 𝔼(Yₙ | Y₁, …, Yₙ₋₁) = 𝔼[Xₙ] · Yₙ₋₁ = Yₙ₋₁.

(d)
Iterated blackjack: Suppose one shows up to a casino and plays infinitely many games of blackjack. If Xt is their wealth at time t, then Xt is a supermartingale. This is because each game has negative expected value (house edge).
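For example (a), the martingale property can be watched numerically; in this sketch of mine, the average of X_t over many simulated worlds stays near X₀ = 0 at every time:

    import numpy as np

    rng = np.random.default_rng(4)
    steps = rng.choice([-1, 1], size=(5000, 200))   # 5000 worlds, 200 minutes
    paths = steps.cumsum(axis=1)                    # X_t for each world
    print(paths[:, [9, 49, 199]].mean(axis=0))      # all entries near 0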

Example 41.4.4 (Frivolous/inflammatory example — real life is a supermartingale)

Let Xt be your happiness on day t of your life. Life has its ups and downs, so it is not the case that Xt Xt1 for every t. For example, you might win the lottery one day.

However, on any given day, many things can go wrong (e.g. zombie apocalypse), and by Murphy’s Law this is more likely than things going well. Also, as you get older, you have an increasing number of responsibilities and your health gradually begins to deteriorate.

Thus it seems that

𝔼(X_t | X₀, …, X_{t−1}) ≤ X_{t−1}

is a reasonable description of the future — in expectation, each successive day is slightly worse than the previous one. (In particular, if we set X_t = −∞ on death, then as long as you have a positive probability of dying, the displayed inequality is obviously true.)

Before going on, we will state without proof one useful result: if a martingale is bounded, then it will almost certainly converge.

Theorem 41.4.5 (Doob’s martingale convergence theorem)
Let X₀, X₁, … be a supermartingale on a probability space Ω such that

sup_{n≥0} 𝔼[|Xₙ|] < ∞.

Then there exists a random variable X∞ : Ω → ℝ such that

Xₙ → X∞ almost surely.

41.5  Optional stopping

Prototypical example for this section: Las Vegas.

In the first section we described how to make money almost surely. The key advantage the gambler had was the ability to quit whenever he wanted (equivalently, an ability to control the size of the bets; betting $0 forever is the same as quitting.) Let’s formalize a notion of stopping time.

The idea is we want to define a function τ : Ω → {0, 1, …, ∞}, interpreted as the time at which we quit, such that the decision to stop at time n can depend only on the information ℱₙ available at time n; we should not be able to peek into the future.

Here’s the compiled machine code.

Definition 41.5.1. Let ℱ₀ ⊆ ℱ₁ ⊆ ⋯ be a filtration on a probability space Ω. A stopping time with respect to this filtration is a function τ : Ω → {0, 1, …, ∞} such that for every n, the event {ω : τ(ω) = n} is ℱₙ-measurable. Given a sequence X₀, X₁, … of random variables, we also write X_{τ∧n} for the random variable ω ↦ X_{min(τ(ω), n)}(ω), the process stopped at time τ.

Proposition 41.5.2 (Stopped supermartingales are still supermartingales)
Let X₀, X₁, … be a supermartingale and let τ be a stopping time. Then the sequence

X_{τ∧0}, X_{τ∧1}, …

is itself a supermartingale.

Proof. We have almost everywhere the (in)equalities

𝔼(X_{τ∧n} | ℱₙ₋₁) = 𝔼(X_{τ∧(n−1)} + 1_{τ≥n} · (Xₙ − Xₙ₋₁) | ℱₙ₋₁)
= 𝔼(X_{τ∧(n−1)} | ℱₙ₋₁) + 𝔼(1_{τ≥n} · (Xₙ − Xₙ₋₁) | ℱₙ₋₁)
= X_{τ∧(n−1)} + 1_{τ≥n} · 𝔼(Xₙ − Xₙ₋₁ | ℱₙ₋₁) ≤ X_{τ∧(n−1)}

as functions from Ω → ℝ, using that 1_{τ≥n} and X_{τ∧(n−1)} are ℱₙ₋₁-measurable. □

Theorem 41.5.3 (Doob’s optional stopping theorem)
Let X₀, X₁, … be a supermartingale on a probability space Ω, with respect to a filtration ℱ₀ ⊆ ℱ₁ ⊆ ⋯. Let τ be a stopping time with respect to this filtration. Suppose that at least one of the following hypotheses holds, for some constant C:

(a)
Finite time: τ(ω) ≤ C for almost all ω.
(b)
Finite money: for each n ≥ 1, |X_{τ∧n}(ω)| ≤ C for almost all ω.
(c)
Finite bets: we have 𝔼[τ] < ∞, and for each n ≥ 1, the conditional expectation

𝔼(|Xₙ − Xₙ₋₁| | ℱₙ₋₁)

takes on values at most C for almost all ω ∈ Ω satisfying τ(ω) ≥ n.

Then X_τ is well-defined almost everywhere, and more importantly,

𝔼[X_τ] ≤ 𝔼[X₀].

The last inequality can be cheekily expressed as “the only winning move is not to play”.

Proof. By Proposition 41.5.2 we have 𝔼[X_{τ∧n}] ≤ 𝔼[X₀] for every n. It remains to let n → ∞; each of the three hypotheses justifies exchanging the limit with the expectation (in (a) the sequence X_{τ∧n} is eventually constant, and in (b) and (c) one can appeal to the dominated convergence theorem). □

Exercise 41.5.4. Conclude that going to Las Vegas with the strategy described in the first section is a really bad idea. What goes wrong?

41.6  Fun applications of optional stopping (TO DO)

We now give three problems which showcase some of the power of the results we have developed so far.

41.6.i  The ballot problem

Suppose Alice and Bob are racing in an election; Alice received a votes total while Bob received b votes total, and a > b. If the votes are counted in a random order, one could ask: what is the probability that Alice stays strictly ahead of Bob throughout the count?


Proposition 41.6.1 (Ballot problem)
This occurs with probability (a − b)/(a + b).
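A Monte Carlo check of this proposition (a sketch of my own; with a = 5 and b = 3 the claimed probability is 1/4):

    import numpy as np

    rng = np.random.default_rng(5)
    a, b, trials = 5, 3, 10**5
    votes = np.array([1] * a + [-1] * b)        # +1 for Alice, -1 for Bob
    wins = 0
    for _ in range(trials):
        rng.shuffle(votes)
        wins += (votes.cumsum() > 0).all()      # Alice strictly ahead throughout
    print(wins / trials)                        # approx (a-b)/(a+b) = 0.25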

41.6.ii  ABRACADABRA

41.6.iii  USA TST 2018

41.7  A few harder problems to think about

Problem 41A (Examples of martingales). We give some more examples of martingales.

(a)
(Simple random walk) Let X₁, X₂, … be i.i.d. random variables which equal +1 with probability 1/2, and −1 with probability 1/2. Prove that

Yₙ = (X₁ + ⋯ + Xₙ)² − n

is a martingale.

(b)
(de Moivre’s martingale) Fix real numbers p and q such that p, q > 0 and p + q = 1. Let X₁, X₂, … be i.i.d. random variables which equal +1 with probability p, and −1 with probability q. Show that

Yₙ = (q/p)^(X₁ + X₂ + ⋯ + Xₙ)

is a martingale.

(c)
(Pólya’s urn) An urn contains one red and one blue marble initially. Every minute, a marble is randomly removed from the urn, and two more marbles of the same color are added to the urn. Thus after n minutes, the urn will have n + 2 marbles.

Let rn denote the fraction of marbles which are red. Show that rn is a martingale.

Problem 41B. A deck has 52 cards; of them 26 are red and 26 are black. The cards are drawn and revealed one at a time. At any point, if there is at least one card remaining in the deck, you may stop the dealer; you win if (and only if) the next card in the deck is red. If all cards are dealt, then you lose. Across all possible strategies, determine the maximal probability of winning.

Problem 41C (Wald’s identity). Let μ be a real number. Let X₁, X₂, … be independent random variables on a probability space Ω with mean μ. Finally let τ : Ω → {1, 2, …} be a stopping time such that 𝔼[τ] < ∞, and such that the event τ = n depends only on X₁, …, Xₙ.

Prove that

𝔼[X₁ + X₂ + ⋯ + X_τ] = μ · 𝔼[τ].

Problem 41D (Unbiased drunkard’s walk). An ant starts at 0 on a number line, and walks left or right one unit, each with probability 1/2. It stops once it reaches either −17 or +8.

(a)
Find the probability it reaches +8 before −17.
(b)
Find the expected value of the amount of time it takes to reach either endpoint.

Problem 41E (Biased drunkard’s walk). Let 0 < p < 1 be a real number. An ant starts at 0 on a number line, and walks one unit right with probability p and one unit left with probability 1 − p. It stops once it reaches either −17 or +8. Find the probability it reaches +8 first.

Problem 41F. The number 1 is written on a blackboard. Every minute, if the number a is written on the board, it’s erased and replaced by a real number in the interval [0,2.01a] selected uniformly at random. What is the probability that the resulting sequence of numbers approaches 0?

Part XII
Differential Geometry

42  Multivariable calculus done correctly

As I have ranted about before, linear algebra is done wrong by the extensive use of matrices to obscure the structure of a linear map. Similar problems occur with multivariable calculus, so here I would like to set the record straight.

Since we are doing this chapter using morally correct linear algebra, it’s imperative you’re comfortable with linear maps, and in particular the dual space V∨, which we will repeatedly use.

In this chapter, all vector spaces have norms and are finite-dimensional over ℝ. So in particular every vector space is also a metric space (with metric given by the norm), and we can talk about open sets as usual.

42.1  The total derivative

Prototypical example for this section: If f(x,y) = x² + y², then (Df)_{(x,y)} = 2x·e₁∨ + 2y·e₂∨.

First, let f : [a,b] → ℝ. You might recall from high school calculus that for every point p, we defined f′(p) as the derivative at the point p (if it existed), which we interpreted as the slope of the “tangent line”.

That’s fine, but I claim that the “better” way to interpret the derivative at that point is as a linear map, that is, as a function. If f′(p) = 1.5, then the derivative tells me that if I move 𝜀 away from p then I should expect f to change by about 1.5𝜀. In other words,

The derivative of f at p approximates f near p by a linear function.

What about more generally? Suppose I have a function like f : ℝ² → ℝ, say

f(x,y) = x² + y²

for concreteness or something. For a point p ∈ ℝ², the “derivative” of f at p ought to represent a linear map that approximates f at that point p. That means I want a linear map T : ℝ² → ℝ such that

f(p + v) ≈ f (p)+ T(v)

for small displacements v ∈ ℝ².

Even more generally, if f : U → W with U ⊆ V open (in the ‖·‖_V metric as usual), then the derivative at p ∈ U ought to be a linear map T : V → W so that

f(p + v) ≈ f(p) + T(v) ∈ W.

(We need U open so that for small enough v, p + v ∈ U as well.) In fact this is exactly what we were doing earlier with f′(p) in high school.

Image derived from [?]

The only difference is that, by an unfortunate coincidence, a linear map can be represented by just its slope. And in the unending quest to make everything a number so that it can be AP tested, we immediately forgot all about what we were trying to do in the first place and just defined the derivative of f to be a number instead of a function.

The fundamental idea of Calculus is the local approximation of functions by linear functions. The derivative does exactly this.

Jean Dieudonné as quoted in [?] continues:

In the classical teaching of Calculus, this idea is immediately obscured by the accidental fact that, on a one-dimensional vector space, there is a one-to-one correspondence between linear forms and numbers, and therefore the derivative at a point is defined as a number instead of a linear form. This slavish subservience to the shibboleth of numerical interpretation at any cost becomes much worse . . .

So let’s do this right. The only thing that we have to do is say what “≈” means, and for this we use the norm of the vector space.

Definition 42.1.1. Let U ⊆ V be open. Let f : U → W be a continuous function, and p ∈ U. Suppose there exists a linear map T : V → W such that

lim_{‖v‖_V → 0} ‖f(p + v) − f(p) − T(v)‖_W / ‖v‖_V = 0.

Then T is the total derivative of f at p. We denote this by (Df)_p, and say f is differentiable at p.

If (Df)p exists at every point, we say f is differentiable.

Question 42.1.2. Check that if V = W = ℝ, this is equivalent to the single-variable definition. (What are the linear maps from V to W?)

Example 42.1.3 (Total derivative of f(x,y) = x² + y²)
Let V = ℝ² with standard basis e₁, e₂ and let W = ℝ, and let f(xe₁ + ye₂) = x² + y². Let p = ae₁ + be₂. Then, we claim that

(Df)_p : ℝ² → ℝ   by   v ↦ 2a·e₁∨(v) + 2b·e₂∨(v).

Here, the notation e₁∨ and e₂∨ makes sense, because by definition (Df)_p ∈ V∨: these are functions from V to ℝ!

Let’s check this manually with the limit definition. Set v = xe₁ + ye₂, and note that the norm on V is ‖(x,y)‖_V = √(x² + y²) while the norm on W is just the absolute value ‖c‖_W = |c|. Then we compute

‖f(p + v) − f(p) − T(v)‖_W / ‖v‖_V
= |(a + x)² + (b + y)² − (a² + b²) − (2ax + 2by)| / √(x² + y²)
= (x² + y²) / √(x² + y²)
= √(x² + y²) → 0

as ‖v‖ → 0. Thus, for p = ae₁ + be₂ we indeed have (Df)_p = 2a·e₁∨ + 2b·e₂∨.
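The limit definition can also be checked numerically; in this sketch of mine the error ratio visibly shrinks with ‖v‖:

    import numpy as np

    rng = np.random.default_rng(6)
    a, b = 1.0, 2.0
    f = lambda x, y: x**2 + y**2
    T = lambda v: 2*a*v[0] + 2*b*v[1]           # the claimed (Df)_p
    for scale in [1e-1, 1e-3, 1e-5]:
        v = scale * rng.standard_normal(2)
        err = abs(f(a + v[0], b + v[1]) - f(a, b) - T(v))
        print(scale, err / np.linalg.norm(v))   # ratio goes to 0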

Remark 42.1.4 — As usual, differentiability implies continuity.

Remark 42.1.5 — Although U ⊆ V, it might be helpful to think of vectors from U and V as different types of objects (in particular, note that it’s possible that 0_V ∉ U). The vectors in U are “inputs” on our space while the vectors coming from V are “small displacements”. For this reason, I deliberately try to use p ∈ U and v ∈ V when possible.

42.2  The projection principle

Before proceeding I need to say something really important.

Theorem 42.2.1 (Projection principle)
Let U be an open subset of the vector space V. Let W be an n-dimensional real vector space with basis w₁, …, wₙ. Then there is a bijection between continuous functions f : U → W and n-tuples of continuous functions f₁, f₂, …, fₙ : U → ℝ, by projection onto the ith basis element, i.e.

f(v) = f1(v)w1 + ⋅⋅⋅+ fn(v)wn.

Proof. Obvious. □

The theorem remains true if one replaces “continuous” by “differentiable”, “smooth”, “arbitrary”, or most other reasonable words. Translation:

To think about a function f : U → ℝⁿ, it suffices to think about each coordinate separately.

For this reason, we’ll most often be interested in functions f : U → ℝ. That’s why the dual space V∨ is so important.

42.3  Total and partial derivatives

Prototypical example for this section: If f(x,y) = x² + y², then Df : (x,y) ↦ 2x·e₁∨ + 2y·e₂∨, and ∂f/∂x = 2x, ∂f/∂y = 2y.

Let U ⊆ V be open and let V have a basis e₁, …, eₙ. Suppose f : U → ℝ is a function which is differentiable everywhere, meaning (Df)_p ∈ V∨ exists for every p. In that case, one can consider Df as itself a function:

Df : U → V∨
p ↦ (Df)_p.

This is a little crazy: to every point in U we associate a function in V∨. We say Df is the total derivative of f, to reflect how much information we’re dealing with. We say (Df)_p is the total derivative at p.

Let’s apply the projection principle now to Df. Since we picked a basis e₁, …, eₙ of V, there is a corresponding dual basis e₁∨, e₂∨, …, eₙ∨. The projection principle tells us that Df can thus be thought of as just n functions, so we can write

Df = ψ₁e₁∨ + ⋯ + ψₙeₙ∨.

In fact, we can even describe what the ψi are.

Definition 42.3.1. The ith partial derivative of f : U → ℝ, denoted

∂f/∂eᵢ : U → ℝ,

is defined by

∂f/∂eᵢ (p) := lim_{t→0} (f(p + teᵢ) − f(p)) / t.

You can think of it as “f′ along eᵢ”.

Question 42.3.2. Check that if Df exists, then

(Df)_p(eᵢ) = ∂f/∂eᵢ (p).

Remark 42.3.3 — Of course you can write down a definition of ∂f/∂v for any v (rather than just the eᵢ).

From the above remarks, we can derive that

Df = ∂f/∂e₁ · e₁∨ + ⋯ + ∂f/∂eₙ · eₙ∨

and so given a basis of V, we can think of Df as just the n partials.

Remark 42.3.4 — Keep in mind that each ∂f/∂eᵢ is a function from U to the reals. That is to say,

(Df)_p = ∂f/∂e₁(p) · e₁∨ + ⋯ + ∂f/∂eₙ(p) · eₙ∨ ∈ V∨

where each coefficient ∂f/∂eᵢ(p) is a real number.

Example 42.3.5 (Partial derivatives of f(x,y) = x² + y²)
Let f : ℝ² → ℝ by (x,y) ↦ x² + y². Then in our new language,

Df : (x,y) ↦ 2x·e₁∨ + 2y·e₂∨.

Thus the partials are

∂f/∂x : (x,y) ↦ 2x ∈ ℝ   and   ∂f/∂y : (x,y) ↦ 2y ∈ ℝ.

With all that said, I haven’t really said much about how to find the total derivative itself. For example, if I told you

f(x,y) = x sin y + x²y⁴

you might want to be able to compute Df without going through that horrible limit definition I told you about earlier.

Fortunately, it turns out you already know how to compute partial derivatives, because you had to take AP Calculus at some point in your life. It turns out for most reasonable functions, this is all you’ll ever need.

Theorem 42.3.6 (Continuous partials implies differentiable)
Let U ⊆ V be open and pick any basis e₁, …, eₙ. Let f : U → ℝ and suppose that ∂f/∂eᵢ is defined for each i and moreover is continuous. Then f is differentiable and Df is given by

Df = Σᵢ₌₁ⁿ ∂f/∂eᵢ · eᵢ∨.

Proof. Not going to write out the details, but…given v = t1e1 + ⋅⋅⋅ + tnen, the idea is to just walk from p to p + t1e1, p + t1e1 + t2e2, …, up to p + t1e1 + t2e2 + ⋅⋅⋅ + tnen = p + v, picking up the partial derivatives on the way. Do some calculation. □

Remark 42.3.7 — The continuity condition cannot be dropped. The function

f(x,y) = xy / (x² + y²) for (x,y) ≠ (0,0), and f(0,0) = 0,

is the classic counterexample: the total derivative Df does not exist at zero, even though both partials do.
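Numerically (my own quick illustration): both partials of this f at the origin are 0 since f vanishes on the axes, so the only candidate total derivative is T = 0. But along the diagonal v = (t,t) we have f(v) = 1/2, so the ratio from the limit definition refuses to go to 0:

    import numpy as np

    f = lambda x, y: x*y / (x**2 + y**2) if (x, y) != (0, 0) else 0.0
    for t in [1e-1, 1e-3, 1e-5]:
        v = np.array([t, t])
        ratio = abs(f(*v) - f(0, 0)) / np.linalg.norm(v)
        print(t, ratio)                         # grows like 1/(2*sqrt(2)*t)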

Example 42.3.8 (Actually computing a total derivative)
Let f(x,y) = x sin y + x²y⁴. Then

∂f/∂x (x,y) = sin y + 2xy⁴
∂f/∂y (x,y) = x cos y + 4x²y³.

So ?? applies, and Df = ∂f/∂x · e₁∨ + ∂f/∂y · e₂∨, which I won’t bother to write out.

The example f(x,y) = x² + y² is the same thing. That being said, who cares about x sin y + x²y⁴ anyways?

42.4  (Optional) A word on higher derivatives

Let U ⊆ V be open, and take f : U → W, so that Df : U → Hom(V, W).

Well, Hom(V, W) can also be thought of as a normed vector space in its own right: it turns out that one can define an operator norm on it by setting

‖T‖ := sup { ‖T(v)‖_W / ‖v‖_V : v ≠ 0_V }.

So Hom(V, W) can be thought of as a normed vector space as well. Thus it makes sense to write

D(Df) : U → Hom(V, Hom(V, W))

which we abbreviate as D2f. Dropping all doubt and plunging on,

D³f : U → Hom(V, Hom(V, Hom(V, W))).

I’m sorry. As consolation, we at least know that Hom(V, W) ≅ V∨ ⊗ W in a natural way, so we can at least condense this to

Dᵏf : U → (V∨)^⊗k ⊗ W

rather than writing a bunch of Hom’s.

Remark 42.4.1 — If k = 2 and W = ℝ, then (D²f)_p ∈ (V∨)^⊗2, so it can be represented as an n × n matrix, which for some reason is called a Hessian.

The most important property of the second derivative is the following.

Theorem 42.4.2 (Symmetry of D²f)
Let f : U → W with U ⊆ V open. If (D²f)_p exists at some p ∈ U, then it is symmetric, meaning

(D²f)_p(v₁, v₂) = (D²f)_p(v₂, v₁).

I’ll just quote this without proof (see e.g. [?, §5, theorem 16]), because double derivatives make my head spin. An important corollary of this theorem:

Corollary 42.4.3 (Clairaut’s theorem: mixed partials are symmetric)
Let f : U → ℝ with U ⊆ V open be twice differentiable. Then for any point p such that the quantities are defined,

∂/∂eᵢ ∂/∂eⱼ f(p) = ∂/∂eⱼ ∂/∂eᵢ f(p).

42.5  Towards differential forms

This concludes the exposition of what the derivative really is: the key idea I want to communicate in this chapter is that Df should be thought of as a map from U to V∨.

The next natural thing to do is talk about integration. The correct way to do this is through a so-called differential form: you’ll finally know what all those stupid dx’s and dy’s really mean. (They weren’t just there for decoration!)

42.6  A few harder problems to think about

Problem 42A (Chain rule). Let U₁ →f U₂ →g U₃ be differentiable maps between open sets of normed vector spaces Vᵢ, and let h = g ∘ f. Prove the Chain Rule: for any point p ∈ U₁, we have

(Dh)_p = (Dg)_{f(p)} ∘ (Df)_p.

Problem 42B. Let U ⊆ V be open, and f : U → ℝ be differentiable k times. Show that (Dᵏf)_p is symmetric in its k arguments, meaning for any v₁, …, v_k ∈ V and any permutation σ on {1, …, k} we have

(Dᵏf)_p(v₁, …, v_k) = (Dᵏf)_p(v_{σ(1)}, …, v_{σ(k)}).

43  Differential forms

In this chapter, all vector spaces are finite-dimensional real inner product spaces. We first start by (non-rigorously) drawing pictures of all the things that we will define in this chapter. Then we re-do everything again in its proper algebraic context.

43.1  Pictures of differential forms

Before defining a differential form, we first draw some pictures. The key thing to keep in mind is

“The definition of a differential form is: something you can integrate.”
— Joe Harris

We’ll assume that all functions are smooth, i.e. infinitely differentiable.

Let U ⊆ V be an open set of a vector space V. Suppose that we have a function f : U → ℝ, i.e. we assign a value to every point of U.

Definition 43.1.1. A 0-form f on U is just a smooth function f : U → ℝ.

Thus, if we specify a finite set S of points in U we can “integrate” over S by just adding up the values of the points:

0 + √2 + 3 + (−1) = 2 + √2.

So, a 0-form f lets us integrate over 0-dimensional “cells”.

But this is quite boring, because as we know we like to integrate over things like curves, not single points. So, by analogy, we want a 1-form to let us integrate over 1-dimensional cells: i.e. over curves. What information would we need to do that? To answer this, let’s draw a picture of a curve c, which can be thought of as a function c : [0,1] U.

We might think that we could get away with just specifying a number on every point of U (i.e. a 0-form f), and then somehow “add up” all the values of f along the curve. We’ll use this idea in a moment, but we can in fact do something more general. Notice how when we walk along a smooth curve, at every point p we also have some extra information: a tangent vector v. So, we can define a 1-form α as follows. A 0-form just took a point and gave a real number, but a 1-form will take both a point and a tangent vector at that point, and spit out a real number. So a 1-form α is a smooth function on pairs (p, v), where v is a tangent vector at p, to ℝ. Hence

α : U × V → ℝ.

Actually, for any point p, we will require that α(p, −) is a linear function in terms of the vectors: i.e. we want for example that α(p, 2v) = 2α(p, v). So it is more customary to think of α as:

Definition 43.1.2. A 1-form α is a smooth function

α : U → V∨.

Like with Df, we’ll use α_p instead of α(p). So, at every point p, α_p is some linear functional that eats tangent vectors at p, and spits out a real number. Thus, we think of α_p as an element of V∨:

α_p ∈ V∨.

Next, we draw pictures of 2-forms. This should, for example, let us integrate over a blob (a so-called 2-cell) of the form

c : [0,1]× [0,1] → U

i.e. for example, a square in U. In the previous example with 1-forms, we looked at tangent vectors to the curve c. This time, at points we will look at pairs of tangent vectors in U: in the same sense that lots of tangent vectors approximate the entire curve, lots of tiny squares will approximate the big square in U.

So what should a 2-form β be? As before, it should start by taking a point p ∈ U, so β_p is now a linear functional: but this time, it should be a linear map on two vectors v and w. Here v and w are not tangent so much as their span cuts out a small parallelogram. So, the right thing to do is in fact consider

β_p ∈ V∨ ∧ V∨.

That is, to use the wedge product to get a handle on the idea that v and w span a parallelogram. Another valid choice would have been (V ∧ V)∨; in fact, the two are isomorphic, but it will be more convenient to write it in the former.

43.2  Pictures of exterior derivatives

Next question:

How can we build a 1-form from a 0-form?

Let f be a 0-form on U; thus, we have a function f : U . Then in fact there is a very natural 1-form on U arising from f, appropriately called df. Namely, given a point p and a tangent vector v, the differential form (df)p returns the change in f along v. In other words, it’s just the total derivative (Df)p(v).

Thus, df measures “the change in f”.

Now, even if I haven’t defined integration yet, given a curve c from a point a to b, what do you think

∫_c df

should be equal to? Remember that df is the 1-form that measures “infinitesimal change in f”. So if we add up all the change in f along a path from a to b, then the answer we get should just be

∫_c df = f(b) − f(a).

This is the first case of something we call Stokes’ theorem.

Generalizing, how should we get from a 1-form to a 2-form? At each point p, a 2-form β gives a β_p which takes in a “parallelogram” and returns a real number. Now suppose we have a 1-form α. Then along each of the edges of a parallelogram, with an appropriate sign convention, the 1-form α gives us a real number. So, given a 1-form α, we define dα to be the 2-form that takes in a parallelogram spanned by v and w, and returns the measure of α along the boundary.

Now, what happens if you integrate dα along the entire square c? The right picture is that, if we think of each little square as making up the big square, then the adjacent boundaries cancel out, and all we are left with is the main boundary. This is again just a case of the so-called Stokes’ theorem.

    

Image from [?]

43.3  Differential forms

Prototypical example for this section: Algebraically, something that looks like f·e₁∨ ∧ e₂∨ + ⋯, and geometrically, see the previous section.

Let’s now get a handle on what dx means. Fix a real vector space V of dimension n, and let e1, …, en be a standard basis. Let U be an open set.

Definition 43.3.1. We define a differential k-form α on U to be a smooth (infinitely differentiable) map α : U → Λᵏ(V∨). (Here Λᵏ(V∨) is the kth wedge power of the dual space V∨.)

Like with Df, we’ll use αp instead of α(p).

Example 43.3.2 (k-forms for k = 0,1)

(a)
A 0-form is just a function U → ℝ.
(b)
A 1-form is a function U → V∨. For example, the total derivative Df of a function V → ℝ is a 1-form.
(c)
Let V = ℝ³ with standard basis e₁, e₂, e₃. Then a typical 2-form is given by

α_p = f(p) · e₁∨ ∧ e₂∨ + g(p) · e₁∨ ∧ e₃∨ + h(p) · e₂∨ ∧ e₃∨ ∈ Λ²(V∨)

where f, g, h : V → ℝ are smooth functions.

Now, by the projection principle (?? ) we only have to specify a function on each of the (n choose k) basis elements of Λᵏ(V∨). So, take any basis {eᵢ} of V, and take the usual basis for Λᵏ(V∨) of elements

e_{i₁}∨ ∧ e_{i₂}∨ ∧ ⋯ ∧ e_{i_k}∨.

Thus, a general k-form takes the shape

α_p = Σ_{1 ≤ i₁ < ⋯ < i_k ≤ n} f_{i₁,…,i_k}(p) · e_{i₁}∨ ∧ e_{i₂}∨ ∧ ⋯ ∧ e_{i_k}∨.

Since this is a huge nuisance to write, we will abbreviate this to just

α = Σ_I f_I · de_I

where we understand the sum runs over I = (i₁, …, i_k), and de_I represents e_{i₁}∨ ∧ ⋯ ∧ e_{i_k}∨.

Now that we have an element of Λᵏ(V∨), what can it do? Well, first let me get the definition on the table, then tell you what it’s doing.

Definition 43.3.3. For linear functions ξ₁, …, ξ_k ∈ V∨ and vectors v₁, …, v_k ∈ V, set

(ξ₁ ∧ ⋯ ∧ ξ_k)(v₁, …, v_k) := det [ξᵢ(vⱼ)]_{1≤i,j≤k},

the determinant of the k × k matrix whose (i,j)th entry is ξᵢ(vⱼ). You can check that this is well-defined under e.g. v ∧ w = −w ∧ v and so on.

Example 43.3.4 (Evaluation of a differential form)
Set V = ℝ³. Suppose that at some point p, the 2-form α returns

α_p = 2e₁∨ ∧ e₂∨ + e₁∨ ∧ e₃∨.

Let v₁ = 3e₁ + e₂ + 4e₃ and v₂ = 8e₁ + 9e₂ + 5e₃. Then

α_p(v₁, v₂) = 2 det [3 8; 1 9] + det [3 8; 4 5] = 2 · 19 − 17 = 21.

What does this definition mean? One way to say it is that

If I walk to a point p ∈ U, a k-form α will take in k vectors v₁, …, v_k and spit out a number, which is to be interpreted as a (signed) volume.

Picture:

In other words, at every point p, we get a function α_p. Then I can feed in k vectors to α_p and get a number, which I interpret as a signed volume of the parallelepiped spanned by the {vᵢ}’s in some way (e.g. the flux of a force field). That’s why α_p as a “function” is contrived to lie in the wedge product: this ensures that the notion of “volume” makes sense, so that for example, the equality α_p(v₁, v₂) = −α_p(v₂, v₁) holds.

This is what makes differential forms so fit for integration.

43.4  Exterior derivatives

Prototypical example for this section: Possibly dx₁ = e₁∨.

We now define the exterior derivative df that we gave pictures of at the beginning of the section. It turns out that the exterior derivative is easy to compute given explicit coordinates to work with.

First, given a function f : U → ℝ, we define

df := Df = Σᵢ ∂f/∂eᵢ · eᵢ∨.

In particular, suppose V = ℝⁿ and f(x₁, …, xₙ) = x₁ (i.e. f = e₁∨). Then:

Question 43.4.1. Show that for any p ∈ U,

(d(e₁∨))_p = e₁∨.

Abuse of Notation 43.4.2. Unfortunately, someone somewhere decided it would be a good idea to use “x₁” to denote e₁∨ (because obviously x₁ means “the function that takes (x₁, …, xₙ) ∈ ℝⁿ to x₁”) and then decided that

dx₁ := e₁∨.

This notation is so entrenched that I have no choice but to grudgingly accept it. Note that it’s not even right, since technically it’s (dx₁)_p = e₁∨; dx₁ is a 1-form.

Remark 43.4.3 — This is the reason why we use the notation df/dx in calculus now: given, say, f : ℝ → ℝ by f(x) = x², it is indeed true that

df = 2x · e₁∨ = 2x · dx

and so by (more) abuse of notation we write df/dx = 2x.

More generally, we can define the exterior derivative in terms of our basis e₁, …, eₙ as follows: if α = Σ_I f_I de_I then we set

dα := Σ_I df_I ∧ de_I = Σ_I Σ_j ∂f_I/∂e_j · de_j ∧ de_I.

This doesn’t depend on the choice of basis.

Example 43.4.4 (Computing some exterior derivatives)
Let V = ℝ³ with standard basis e₁, e₂, e₃. Let f(x, y, z) = x⁴ + y³ + 2xz. Then we compute

df = Df = (4x³ + 2z) dx + 3y² dy + 2x dz.

Next, we can evaluate d(df) as prescribed: it is

d²f = (12x² dx + 2 dz) ∧ dx + (6y dy) ∧ dy + (2 dx) ∧ dz
= 12x²(dx ∧ dx) + 2(dz ∧ dx) + 6y(dy ∧ dy) + 2(dx ∧ dz)
= 2(dz ∧ dx) + 2(dx ∧ dz)
= 0.

So surprisingly, d²f is the zero map. Here, we have exploited ?? for the first time, in writing dx, dy, dz.
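As a symbolic spot check of this particular computation (my own sketch, using sympy): by the coordinate formula for d, the coefficient of dxᵢ ∧ dxⱼ in d(df) is the difference of the two mixed partials, which vanishes:

    import sympy as sp

    x, y, z = sp.symbols('x y z')
    f = x**4 + y**3 + 2*x*z
    coords = [x, y, z]
    # Coefficient of dx_i ∧ dx_j in d(df): d_i d_j f - d_j d_i f.
    for i in range(3):
        for j in range(i + 1, 3):
            print(sp.diff(f, coords[i], coords[j])
                  - sp.diff(f, coords[j], coords[i]))    # prints 0 three times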

And in fact, this is always true in general:

Theorem 43.4.5 (Exterior derivative vanishes)
Let α be any k-form. Then d²(α) = 0. Even more succinctly,

d² = 0.

The proof is left as ?? .

Exercise 43.4.6. Compare the statement d2 = 0 to the geometric picture of a 2-form given at the beginning of this chapter. Why does this intuitively make sense?

Here are some other properties of d:

(a)
d is linear;
(b)
d satisfies a Leibniz rule: if α is a k-form and β is any form, then d(α ∧ β) = dα ∧ β + (−1)ᵏ α ∧ dβ;
(c)
d² = 0;
(d)
on 0-forms, d agrees with the total derivative, i.e. df = Df.

In fact, one can show that d as defined above is the unique map sending k-forms to (k + 1)-forms with these properties. So, one way to define d is to take as axioms the bulleted properties above and then declare d to be the unique solution to this functional equation. In any case, this tells us that our definition of d does not depend on the basis chosen.

Recall that dα measures the change of α along boundaries. In that sense, d² = 0 is saying something like “the boundary of the boundary is empty”. We’ll make this precise when we see Stokes’ theorem in the next chapter.

43.5  Closed and exact forms

Let α be a k-form.

Definition 43.5.1. We say α is closed if dα = 0.

Definition 43.5.2. We say α is exact if dβ = α for some (k − 1)-form β. If k = 0, α is exact only when α = 0.

Question 43.5.3. Show that exact forms are closed.

A natural question arises: are there closed forms which are not exact? Surprisingly, the answer to this question is tied to topology. Here is one important example.

Example 43.5.4 (The angle form)
Let U = ℝ² ∖ {0}, and let 𝜃(p) be the angle formed by the x-axis and the line from the origin to p.

The 1-form α : U → (ℝ²)∨ defined by

α = (−y dx + x dy) / (x² + y²)

is called the angle form: given p ∈ U it measures the change in angle 𝜃(p) along a tangent vector. So intuitively, “α = d𝜃”. Indeed, one can check directly that the angle form is closed.

However, α is not exact: there is no global smooth function 𝜃 : U → ℝ having α as a derivative. This reflects the fact that one can actually perform a full 2π rotation around the origin, i.e. 𝜃 only makes sense mod 2π. Thus the existence of the angle form α reflects the possibility of “winding” around the origin.

So the key idea is that the failure of a closed form to be exact corresponds quite well with “holes” in the space: the same information that homotopy and homology groups are trying to capture. To draw another analogy, in complex analysis Cauchy-Goursat only works when U is simply connected. The “hole” in U is being detected by the existence of a form α. The so-called de Rham cohomology will make this relation explicit.

43.6  A few harder problems to think about

Problem 43A. Show directly that the angle form

α = (−y dx + x dy) / (x² + y²)

is closed.

Problem 43B. Establish ?? , which states that d² = 0.

44  Integrating differential forms

We now show how to integrate differential forms over cells, and state Stokes’ theorem in this context. In this chapter, all vector spaces are finite-dimensional and real.

44.1  Motivation: line integrals

Given a function g : [a,b] → ℝ, we know by the fundamental theorem of calculus that

∫_{[a,b]} g(t) dt = f(b) − f(a)

where f is a function such that g = df/dt. Equivalently, for f : [a,b] → ℝ,

∫_{[a,b]} g dt = ∫_{[a,b]} df = f(b) − f(a)

where df is the exterior derivative we defined earlier.

Cool, so we can integrate over [a,b]. Now suppose more generally, we have U an open subset of our real vector space V and a 1-form α : U → V∨. We consider a parametrized curve, which is a smooth function c : [a,b] → U. Picture:

We want to define an integral ∫_c α such that:

The integral ∫_c α should add up all the α along the curve c.

Our differential form α first takes in a point p to get α_p ∈ V∨. Then, it eats a tangent vector v ∈ V to the curve c to finally give a real number α_p(v) ∈ ℝ. We would like to “add all these numbers up”, using only the notion of an integral over [a,b].

Exercise 44.1.1. Try to guess what the definition of the integral should be. (By type-checking, there’s only one reasonable answer.)

So, the definition we give is

∫_c α := ∫_{[a,b]} α_{c(t)}(c′(t)) dt.

Here, c′(t) is shorthand for (Dc)_t(1). It represents the tangent vector to the curve c at the point p = c(t), at time t. (Here we are taking advantage of the fact that [a,b] is one-dimensional.)

Now that definition was a pain to write, so we will define a differential 1-form c∗α on [a,b] to swallow that entire thing: specifically, in this case we define c∗α to be

(c∗α)_t(𝜀) = α_{c(t)}((Dc)_t(𝜀))

(here 𝜀 is some displacement in time). Thus, we can more succinctly write

∫_c α := ∫_{[a,b]} c∗α.

This is a special case of a pullback: roughly, if ϕ : U → U′ (where U ⊆ V, U′ ⊆ V′), we can change any differential k-form α on U′ to a k-form ϕ∗α on U. In particular, if U = [a,b], we can resort to our old definition of an integral. Let’s now do this in full generality.

44.2  Pullbacks

Let V and V′ be finite dimensional real vector spaces (possibly of different dimensions) and suppose U and U′ are open subsets of each; next, consider a k-form α on U′.

Given a map ϕ : U → U′ we now want to define a pullback in much the same way as before. Picture:

Well, there’s a total of about one thing we can do. Specifically: α accepts a point in U′ and k tangent vectors in V′, and returns a real number. We want ϕ∗α to accept a point p ∈ U and k tangent vectors v₁, …, v_k in V, and feed the corresponding information to α.

Clearly we give the point q = ϕ(p). As for the tangent vectors, since we are interested in volume, we take the derivative of ϕ at p, namely (Dϕ)_p, which will send each of our vectors vᵢ to some vector in the target V′. To cut a long story short:

Definition 44.2.1. Given ϕ : U → U′ and a k-form α, we define the pullback

(ϕ∗α)_p(v₁, …, v_k) := α_{ϕ(p)}((Dϕ)_p(v₁), …, (Dϕ)_p(v_k)).

There is a more concrete way to define the pullback using bases. Suppose w₁, …, wₙ is a basis of V′ and e₁, …, e_m is a basis of V. Thus, by the projection principle (?? ) the map ϕ : U → U′ can be thought of as

ϕ(v) = ϕ₁(v)w₁ + ⋯ + ϕₙ(v)wₙ

where each ϕᵢ takes in a v ∈ U and returns a real number. We know also that α can be written concretely as

α = Σ_{J ⊆ {1,…,n}} f_J · dw_J

where for J = (j₁ < ⋯ < j_k) we write dw_J = w_{j₁}∨ ∧ ⋯ ∧ w_{j_k}∨. Then, we define

ϕ∗α = Σ_{J ⊆ {1,…,n}} (f_J ∘ ϕ) · (Dϕ_{j₁} ∧ ⋯ ∧ Dϕ_{j_k}).

A diligent reader can check these definitions are equivalent.

Example 44.2.2 (Computation of a pullback)
Let V = ℝ² with basis e₁ and e₂, and suppose ϕ : V → V′ is given by sending

ϕ(ae₁ + be₂) = (a² + b²)w₁ + log(a² + 1)w₂ + b³w₃

where w₁, w₂, w₃ is a basis for V′. Consider the form α_q = f(q) · w₁∨ ∧ w₃∨, where f : V′ → ℝ. Then

(ϕ∗α)_p = f(ϕ(p)) · (2a·e₁∨ + 2b·e₂∨) ∧ (3b²·e₂∨) = f(ϕ(p)) · 6ab² · e₁∨ ∧ e₂∨.
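The coefficient 6ab² can also be recovered mechanically (a sketch of my own, taking f = 1 for simplicity): the coefficient of e₁∨ ∧ e₂∨ in Dϕ₁ ∧ Dϕ₃ is the 2 × 2 minor of the Jacobian of ϕ built from rows 1 and 3.

    import sympy as sp

    a, b = sp.symbols('a b')
    phi = sp.Matrix([a**2 + b**2, sp.log(a**2 + 1), b**3])
    J = phi.jacobian(sp.Matrix([a, b]))
    # Minor from rows 1 and 3 (the w1 and w3 components of phi):
    minor = sp.Matrix([[J[0, 0], J[0, 1]], [J[2, 0], J[2, 1]]]).det()
    print(sp.simplify(minor))                   # 6*a*b**2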

It turns out that the pullback behaves as nicely as possible: for example, it is linear, it respects wedge products (ϕ∗(α ∧ β) = ϕ∗α ∧ ϕ∗β), it commutes with the exterior derivative (ϕ∗(dα) = d(ϕ∗α)), and it is compatible with composition ((ϕ ∘ ψ)∗α = ψ∗(ϕ∗α)). But I won’t take the time to check these here (one can verify them all by expanding with a basis).

44.3  Cells

Prototypical example for this section: A disk in ℝ² can be thought of as the cell [0,R] × [0,2π] → ℝ² by (r,𝜃) ↦ (r cos𝜃)e₁ + (r sin𝜃)e₂.

Now that we have the notion of a pullback, we can define the notion of an integral for more general spaces. Specifically, to generalize the notion of integrals we had before:

Definition 44.3.1. A k-cell is a smooth function c : [a₁,b₁] × [a₂,b₂] × ⋯ × [a_k,b_k] → V.

Example 44.3.2 (Examples of cells)
Let V = ℝ² for convenience.

(a)
A 0-cell consists of a single point.
(b)
As we saw, a 1-cell is an arbitrary curve.
(c)
A 2-cell corresponds to a 2-dimensional surface. For example, the map c : [0,R] × [0,2π] → V by

c : (r,𝜃) ↦ (r cos𝜃, r sin𝜃)

can be thought of as a disk of radius R.

Then, to define an integral

∫_c α

for a differential k-form α and a k-cell c : [0,1]ᵏ → V, we simply take the pullback

∫_{[0,1]ᵏ} c∗α.

Since c∗α is a k-form on the k-dimensional unit box, it can be written as f(x₁, …, x_k) dx₁ ∧ ⋯ ∧ dx_k, so the above integral can be written as

∫₀¹ ⋯ ∫₀¹ f(x₁, …, x_k) dx₁ ∧ ⋯ ∧ dx_k.

Example 44.3.3 (Area of a circle)
Consider V = ℝ² and let c : (r,𝜃) ↦ (r cos𝜃)e₁ + (r sin𝜃)e₂ on [0,R] × [0,2π] as before. Take the 2-form α which gives α_p = e₁∨ ∧ e₂∨ at every point p. Then

c∗α = (cos𝜃 dr − r sin𝜃 d𝜃) ∧ (sin𝜃 dr + r cos𝜃 d𝜃)
= r(cos²𝜃 + sin²𝜃)(dr ∧ d𝜃)
= r dr ∧ d𝜃.

Thus,

∫_c α = ∫₀ᴿ ∫₀²ᵖⁱ r dr ∧ d𝜃 = πR²

which is the area of a circle of radius R.
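Numerically (my own sketch), integrating the pulled-back form r dr ∧ d𝜃 over the box [0,R] × [0,2π] with a midpoint rule recovers πR²:

    import numpy as np

    R, m = 2.0, 500
    r = (np.arange(m) + 0.5) * (R / m)              # midpoints of [0, R]
    theta = (np.arange(m) + 0.5) * (2*np.pi / m)    # midpoints of [0, 2*pi]
    rr, _ = np.meshgrid(r, theta)
    integral = rr.sum() * (R / m) * (2*np.pi / m)   # sum of r * dr * dtheta
    print(integral, np.pi * R**2)                   # both about 12.566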

Here’s some geometric intuition for what’s happening. Given a k-cell in V, a differential k-form α accepts a point p and some tangent vectors v₁, …, v_k and spits out a number α_p(v₁, …, v_k), which as before we view as a signed hypervolume. Then the integral adds up all these infinitesimals across the entire cell. In particular, if V = ℝᵏ and we take the form α : p ↦ e₁∨ ∧ ⋯ ∧ e_k∨, then what these α’s give is the kth hypervolume of the cell. For this reason, this α is called the volume form on ℝᵏ.

You’ll notice I’m starting to play loose with the term “cell”: while the cell c : [0,R] × [0,2π] → ℝ² is supposed to be a function, I have been telling you to think of it as a disk (i.e. in terms of its image). In the same vein, a curve [0,1] → V should be thought of as a curve in space, rather than a function on time.

This error turns out to be benign. Let α be a k-form on U and c : [a₁,b₁] × ⋯ × [a_k,b_k] → U a k-cell. Suppose ϕ : [a′₁,b′₁] × ⋯ × [a′_k,b′_k] → [a₁,b₁] × ⋯ × [a_k,b_k] is a map; it is a reparametrization if ϕ is bijective and (Dϕ)_p is always invertible (think “change of variables”); thus

c ∘ ϕ : [a′₁,b′₁] × ⋯ × [a′_k,b′_k] → U

is a k-cell as well. Then ϕ is said to preserve orientation if det(Dϕ)_p > 0 for all p and reverse orientation if det(Dϕ)_p < 0 for all p.

Exercise 44.3.4. Why is it that exactly one of these cases must occur?

Theorem 44.3.5 (Changing variables doesn’t affect integrals)
Let c be a k-cell, α a k-form, and ϕ a reparametrization. Then

∫_{c∘ϕ} α =  ∫_c α    if ϕ preserves orientation
∫_{c∘ϕ} α = −∫_c α    if ϕ reverses orientation.

Proof. Use naturality of the pullback to reduce it to the corresponding theorem in normal calculus. □

So for example, if we had parametrized the disk of radius R as [0,1] × [0,1] → ℝ² by (r,t) ↦ (rR cos(2πt))e₁ + (rR sin(2πt))e₂, we would have arrived at the same result. So we really can think of a k-cell just in terms of the points it specifies.

44.4  Boundaries

Prototypical example for this section: The boundary of [a,b] is {b}−{a}. The boundary of a square goes around its edge counterclockwise.

First, I introduce a technical term that lets us consider multiple cells at once.

Definition 44.4.1. A k-chain in U is a formal linear combination of k-cells over U, i.e. a sum of the form

c = a₁c₁ + ⋯ + aₘcₘ

where each aᵢ is an integer and each cᵢ is a k-cell. We define ∫_c α = ∑ᵢ aᵢ ∫_{cᵢ} α.

In particular, a 0-chain consists of several points, each with a given weight.

Now, how do we define the boundary? For a 1-cell c : [a,b] → U, as I hinted earlier we want the answer to be the 0-chain {c(b)} − {c(a)}. Here’s how we do it in general.

Definition 44.4.2. Suppose c : [0,1]^k → U is a k-cell. Then the boundary of c, denoted ∂c : [0,1]^{k−1} → U, is the (k−1)-chain defined as follows. For each i = 1, …, k define

cᵢ^{start}(t₁, …, t_{k−1}) = c(t₁, …, t_{i−1}, 0, tᵢ, …, t_{k−1})
cᵢ^{stop}(t₁, …, t_{k−1}) = c(t₁, …, t_{i−1}, 1, tᵢ, …, t_{k−1}).

Then

∂c := ∑_{i=1}^{k} (−1)^{i+1} (cᵢ^{stop} − cᵢ^{start}).

Finally, the boundary of a chain is the sum of the boundaries of each cell (with the appropriate weights). That is, ∂(∑ aᵢcᵢ) = ∑ aᵢ ∂cᵢ.

Question 44.4.3. Satisfy yourself that one can extend this definition to a k-cell c : [a₁,b₁] × ⋯ × [aₖ,bₖ] → V (rather than c : [0,1]^k → V).

Example 44.4.4 (Examples of boundaries)
Consider the 2-cell c : [0,1]² → ℝ² shown below.

Here p1, p2, p3, p4 are the images of (0,0), (0,1), (1,0), (1,1), respectively. Then we can think of ∂c as

∂c = [p1,p2]+ [p2,p3]+ [p3,p4]+ [p4,p1]

where each “interval” represents the 1-cell shown by the reddish arrows on the right. We can take the boundary of this as well, and obtain an empty chain as

∂(∂c) = ∑_{i=1}^{4} ∂([pᵢ, p_{i+1}]) = ∑_{i=1}^{4} ({p_{i+1}} − {pᵢ}) = 0

(indices modulo 4, so p₅ = p₁).
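The cancellation in ∂(∂c) is purely combinatorial, so it can be checked by a short program. The sketch below (my own illustration) encodes a face of [0,1]^k as a tuple with None in the free coordinates, implements Definition 44.4.2, and verifies that every (k−2)-face of the cube appears with total weight zero:

from collections import Counter

def boundary(face):
    # Yield (sign, subface) pairs per Definition 44.4.2; a face is a
    # tuple with None for free coordinates and 0/1 for fixed ones.
    free = [j for j, v in enumerate(face) if v is None]
    for i, j in enumerate(free):           # i = 0 here is i = 1 in the text
        for val, s in ((1, +1), (0, -1)):  # stop minus start
            yield ((-1) ** i) * s, face[:j] + (val,) + face[j+1:]

k = 3
counts = Counter()
for s1, f in boundary((None,) * k):
    for s2, g in boundary(f):
        counts[g] += s1 * s2
assert all(v == 0 for v in counts.values())   # everything cancels: del(del c) = 0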

Example 44.4.5 (Boundary of a unit disk)
Consider the unit disk given by

c : [0,1] × [0,1] → ℝ²  by  (s,t) ↦ s cos(2πt) e₁ + s sin(2πt) e₂.

The four parts of the boundary are shown in the picture below:

Note that two of the arrows more or less cancel each other out when they are integrated. Moreover, we interestingly have a degenerate 1-cell at the center of the circle; it is a constant function [0,1] → ℝ² which always gives the origin.

Obligatory theorem, analogous to d² = 0, and left as a problem.

Theorem 44.4.6 (The boundary of the boundary is empty)
∂² = 0, in the sense that for any k-chain c we have ∂(∂c) = 0.

44.5  Stokes’ theorem

Prototypical example for this section: ∫_{[a,b]} dg = g(b) − g(a).

We now have all the ingredients to state Stokes’ theorem for cells.

Theorem 44.5.1 (Stokes’ theorem for cells)
Take U ⊆ V as usual, let c : [0,1]^k → U be a k-cell and let α : U → Λ^{k−1}(V∨) be a (k−1)-form. Then

∫_c dα = ∫_{∂c} α.

In particular, if dα = 0 then the left-hand side vanishes.

For example, if c is the interval [a,b] then ∂c = {b}−{a}, and thus we obtain the fundamental theorem of calculus.
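As a concrete sanity check of Theorem 44.5.1, one can compare both sides symbolically for the 1-form α = x dy on the disk of radius R, so that dα = dx ∧ dy and both sides should give the area πR². This sketch, with parametrizations of my own choosing, uses sympy:

import sympy as sp

s, t, R = sp.symbols('s t R', positive=True)
# Left side: pull dα = dx ∧ dy back to [0,1]^2 via (s,t) -> (sR cos 2πt, sR sin 2πt).
x, y = s*R*sp.cos(2*sp.pi*t), s*R*sp.sin(2*sp.pi*t)
lhs = sp.integrate(sp.simplify(sp.Matrix([x, y]).jacobian([s, t]).det()),
                   (s, 0, 1), (t, 0, 1))
# Right side: integrate α = x dy over the outer boundary circle s = 1.
# (The two radial boundary cells cancel, and the degenerate center cell gives 0.)
xb, yb = R*sp.cos(2*sp.pi*t), R*sp.sin(2*sp.pi*t)
rhs = sp.integrate(xb * sp.diff(yb, t), (t, 0, 1))
assert sp.simplify(lhs - rhs) == 0
print(lhs, rhs)    # pi*R**2 pi*R**2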

44.6  A few harder problems to think about

Problem 44A (Green’s theorem). Let f, g : ℝ² → ℝ be smooth functions. Prove that for any 2-cell c,

∫_c (∂g/∂x − ∂f/∂y) dx ∧ dy = ∫_{∂c} (f dx + g dy).

Problem 44B. Show that ∂² = 0.

Problem 44C (Pullback and d commute). Let U and U′ be open sets of vector spaces V and V′ and let ϕ : U → U′ be a smooth map between them. Prove that for any differential form α on U′ we have

ϕ*(dα) = d(ϕ*α).

Problem 44D (Arc length isn’t a form). Show that there does not exist a 1-form α on ℝ² such that for a curve c : [0,1] → ℝ², the integral ∫_c α gives the arc length of c.

Problem 44E. An exact k-form α is one satisfying α = dβ for some β. Prove that

∫_{C₁} α = ∫_{C₂} α

where C₁ and C₂ are any concentric circles in the plane and α is some exact 1-form.

45  A bit of manifolds

Last chapter, we stated Stokes’ theorem for cells. It turns out there is a much larger class of spaces, the so-called smooth manifolds, for which this makes sense.

Unfortunately, the definition of a smooth manifold is complete garbage, and so by the time I am done defining differential forms and orientations, I will be too lazy to actually define what the integral on it is, and just wave my hands and state Stokes’ theorem.

45.1  Topological manifolds

Prototypical example for this section: S2: “the Earth looks flat”.

Long ago, people thought the Earth was flat, i.e. homeomorphic to a plane, and in particular they thought that π₂(Earth) = 0. But in fact, as most of us know, the Earth is actually a sphere, which is not contractible and in particular π₂(Earth) ≅ ℤ. This observation underlies the definition of a manifold:

An n-manifold is a space which locally looks like ℝⁿ.

Actually there are two ways to think about a topological manifold M:

(a)
every point of M has an open neighborhood homeomorphic to an open subset of ℝⁿ;
(b)
M has an open cover of sets homeomorphic to open subsets of ℝⁿ.

Question 45.1.1. Check that these are equivalent.

While the first one is the best motivation for examples, the second one is easier to use formally.

Definition 45.1.2. A topological n-manifold M is a Hausdorff space with an open cover {Uᵢ} of sets homeomorphic to subsets of ℝⁿ, say by homeomorphisms

ϕᵢ : Uᵢ → Eᵢ ⊆ ℝⁿ

where each Eᵢ is an open subset of ℝⁿ. Each ϕᵢ : Uᵢ → Eᵢ is called a chart, and together they form a so-called atlas.

Remark 45.1.3 — Here “E” stands for “Euclidean”. I think this notation is not standard; usually people just write ϕi(Ui) instead.

Remark 45.1.4 — This definition is nice because it doesn’t depend on embeddings: a manifold is an intrinsic space M, rather than a subset of ℝᴺ for some N. Analogy: an abstract group G is an intrinsic object rather than a subgroup of Sₙ.

Example 45.1.5 (An atlas on S1)
Here is a picture of an atlas for S1, with two open sets.

Question 45.1.6. Where do you think the words “chart” and “atlas” come from?

Example 45.1.7 (Some examples of topological manifolds)

(a)
As discussed at length, the sphere S2 is a 2-manifold: every point in the sphere has a small open neighborhood that looks like D2. One can cover the Earth with just two hemispheres, and each hemisphere is homeomorphic to a disk.
(b)
The circle S1 is a 1-manifold; every point has an open neighborhood that looks like an open interval.
(c)
The torus, Klein bottle, ℝℙ2 are all 2-manifolds.
(d)
ℝⁿ is trivially a manifold, as are its open sets.

All these spaces are compact except ℝⁿ (and its open subsets).

A non-example of a manifold is the closed disk Dⁿ, because it has a boundary; points on the boundary do not have open neighborhoods that look Euclidean.

45.2  Smooth manifolds

Prototypical example for this section: All the topological manifolds.

Let M be a topological n-manifold with atlas {ϕᵢ : Uᵢ → Eᵢ}ᵢ.

Definition 45.2.1. For any i, j such that Uᵢ ∩ Uⱼ ≠ ∅, the transition map ϕᵢⱼ is the composed map

ϕᵢⱼ : Eᵢ ∩ ϕᵢ^{img}(Uᵢ ∩ Uⱼ) --ϕᵢ⁻¹--> Uᵢ ∩ Uⱼ --ϕⱼ--> Eⱼ ∩ ϕⱼ^{img}(Uᵢ ∩ Uⱼ).

Sorry for the dense notation, let me explain. The intersections with the images ϕᵢ^{img}(Uᵢ ∩ Uⱼ) and ϕⱼ^{img}(Uᵢ ∩ Uⱼ) are a notational annoyance to make the map well-defined and a homeomorphism. The transition map is just the natural way to go from Eᵢ to Eⱼ, restricted to overlaps. Picture below, where the intersections are just the green portions of each E₁ and E₂:

We want to add enough structure so that we can use differential forms.

Definition 45.2.2. We say M is a smooth manifold if all its transition maps are smooth.

This definition makes sense, because we know what it means for a map between two open sets of n to be differentiable.
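For instance, take the atlas on S¹ given by stereographic projection from the north and south poles (a standard choice; the specific computation is my own illustration). The transition map works out to t ↦ 1/t, smooth on the overlap t ≠ 0, so this atlas makes S¹ a smooth manifold:

import sympy as sp

t = sp.symbols('t', real=True, nonzero=True)
# The point of S^1 whose north-pole stereographic coordinate is t.
x = 2*t / (t**2 + 1)
y = (t**2 - 1) / (t**2 + 1)
assert sp.simplify(x**2 + y**2) == 1        # it lies on the circle
assert sp.simplify(x / (1 - y) - t) == 0    # phi_N(x, y) = x/(1-y) recovers t
# Transition map phi_S ∘ phi_N^{-1}, where phi_S(x, y) = x/(1+y).
print(sp.simplify(x / (1 + y)))             # 1/t, smooth whenever t != 0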

With smooth manifolds we can try to port over definitions that we built for n onto our manifolds. So in general, all definitions involving smooth manifolds will reduce to something on each of the coordinate charts, with a compatibility condition.

As an example, here is the definition of a “smooth map”:

Definition 45.2.3.

(a)
Let M be a smooth manifold. A continuous function f : M → ℝ is called smooth if the composition

Eᵢ --ϕᵢ⁻¹--> Uᵢ ↪ M --f--> ℝ

is smooth as a function Eᵢ → ℝ.

(b)
Let M and N be smooth with atlases {ϕᵢᴹ : Uᵢᴹ → Eᵢᴹ}ᵢ and {ϕⱼᴺ : Uⱼᴺ → Eⱼᴺ}ⱼ. A map f : M → N is smooth if for every i and j, the composed map

Eᵢᴹ --(ϕᵢᴹ)⁻¹--> Uᵢᴹ ↪ M --f--> N ↠ Uⱼᴺ --ϕⱼᴺ--> Eⱼᴺ

is smooth, as a function Eᵢᴹ → Eⱼᴺ.

45.3  Regular value theorem

Prototypical example for this section: x² + y² = 1 is a circle!

Despite all that I’ve written about general manifolds, it would be sort of mean if I left you here, because I have not really told you how to actually construct manifolds in practice, even though we know the circle x² + y² = 1 is a great example of a one-dimensional manifold embedded in ℝ².

Theorem 45.3.1 (Regular value theorem)
Let V be an n-dimensional real normed vector space, let U ⊆ V be open and let f₁, …, fₘ : U → ℝ be smooth functions. Let M be the set of points p ∈ U such that f₁(p) = ⋯ = fₘ(p) = 0.

Assume M is nonempty and that the map

V → ℝᵐ  by  v ↦ ((Df₁)_p(v), …, (Dfₘ)_p(v))

has rank m, for every point p ∈ M. Then M is a manifold of dimension n − m.

For a proof, see [?, Theorem 6.3].

One very common special case is to take m = 1 above.

Corollary 45.3.2 (Level hypersurfaces)
Let V be a finite-dimensional real normed vector space, let U ⊆ V be open and let f : U → ℝ be smooth. Let M be the set of points p ∈ U such that f(p) = 0. If M ≠ ∅ and (Df)_p is not the zero map for any p ∈ M, then M is a manifold of dimension n − 1.

Example 45.3.3 (The circle x² + y² − c = 0)
Let f(x,y) = x² + y² − c, f : ℝ² → ℝ, where c is a positive real number. Note that

Df = 2x·dx + 2y·dy

which in particular is nonzero as long as (x,y) ≠ (0,0), i.e. as long as c ≠ 0. Thus the circle x² + y² = c is a one-dimensional manifold.
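The rank condition is also easy to let a computer check (a small sketch; f is from the example above):

import sympy as sp

x, y = sp.symbols('x y', real=True)
c = sp.symbols('c', positive=True)
f = x**2 + y**2 - c
grad = [sp.diff(f, v) for v in (x, y)]      # (Df) = (2x, 2y)
crit = sp.solve(grad, [x, y], dict=True)    # points where (Df)_p is the zero map
print(crit)                                 # [{x: 0, y: 0}]
# The only critical point is the origin, which is not on M since f(0,0) = -c < 0;
# so Df has rank 1 everywhere on M, and M is a 1-manifold.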

We won’t give further examples since I’m only mentioning this in passing in order to increase your capacity to write real concrete examples. (But [?, Chapter 6.2] has some more examples, beautifully illustrated.)

45.4  Differential forms on manifolds

We already know what a differential form is on an open set U ⊆ ℝⁿ. So, we naturally try to port over the definition: a differential form on each chart, plus a compatibility condition.

Let M be a smooth manifold with atlas {ϕᵢ : Uᵢ → Eᵢ}ᵢ.

Definition 45.4.1. A differential k-form α on a smooth manifold M is a collection {αi}i of differential k-forms on each Ei, such that for any j and i we have that

αⱼ = ϕᵢⱼ*(αᵢ).

In English: we specify a differential form on each chart, which is compatible under pullbacks of the transition maps.

45.5  Orientations

Prototypical example for this section: Left versus right, clockwise vs. counterclockwise.

This still isn’t enough to integrate on manifolds. We need one more definition: that of an orientation.

The main issue is the observation from standard calculus that

∫ₐᵇ f(x) dx = −∫ᵇₐ f(x) dx.

Consider then a space M which is homeomorphic to an interval. If we have a 1-form α, how do we integrate it over M? Since M is just a topological space (rather than a subset of ℝ), there is no default “left” or “right” that we can pick. As another example, if M = S¹ is a circle, there is no default “clockwise” or “counterclockwise” unless we decide to embed M into ℝ².

To work around this we actually have to make additional assumptions about our manifold.

Definition 45.5.1. A smooth n-manifold is orientable if there exists a differential n-form ω on M such that for every p ∈ M,

ω_p ≠ 0.

Recall here that ω_p is an element of Λⁿ(V∨). In that case we say ω is a volume form of M.

How do we picture this definition? Recall that a differential form is supposed to take tangent vectors of M and return real numbers. To this end, we can think of each point p ∈ M as having a tangent plane T_p(M) which is n-dimensional. Now since the volume form ω is n-dimensional, it takes an entire basis of T_p(M) and gives a real number. So a manifold is orientable if there exists a consistent choice of sign for the basis of tangent vectors at every point of the manifold.

For “embedded manifolds”, this just amounts to being able to pick a nonzero field of normal vectors to each point p M. For example, S1 is orientable in this way.

Similarly, one can orient a sphere S² by having a field of vectors pointing away from (or towards) the center. This is all non-rigorous, because I haven’t defined the tangent plane T_p(M); since M is in general an intrinsic object one has to be quite roundabout to define T_p(M) (although I do so in an optional section later). In any event, the point is that guesses about the orientability of spaces are likely to be correct.

Example 45.5.2 (Orientable surfaces)

(a)
Spheres Sn, planes, and the torus S1 × S1 are orientable.
(b)
The Möbius strip and Klein bottle are not orientable: they are “one-sided”.
(c)
ℂℙn is orientable for any n.
(d)
ℝℙn is orientable only for odd n.

45.6  Stokes’ theorem for manifolds

Stokes’ theorem in the general case is based on the idea of a manifold with boundary M, which I won’t define, other than to say its boundary ∂M is an (n−1)-dimensional manifold, and that it is oriented if M is oriented. An example is M = D², which has boundary ∂M = S¹.

Next,

Definition 45.6.1. The support of a differential form α on M is the closure of the set

{p ∈ M  | αp ⁄= 0}.

If this support is compact as a topological space, we say α is compactly supported.

Remark 45.6.2 — For example, volume forms are supported on all of M.

Now, one can define integration on oriented manifolds, but I won’t define this because the definition is truly awful. Then Stokes’ theorem says

Theorem 45.6.3 (Stokes’ theorem for manifolds)
Let M be a smooth oriented n-manifold with boundary and let α be a compactly supported (n−1)-form. Then

∫_M dα = ∫_{∂M} α.

All the omitted details are developed in full in [?].

45.7  (Optional) The tangent and cotangent space

Prototypical example for this section: Draw a line tangent to a circle, or a plane tangent to a sphere.

Let M be a smooth manifold and p M a point. I omitted the definition of Tp(M) earlier, but want to actually define it now.

As I said, geometrically we know what this should look like for our usual examples. For example, if M = S¹ is a circle embedded in ℝ², then the tangent vector at a point p should just look like a vector running off tangent to the circle. Similarly, given a sphere M = S², the tangent space at a point p along the sphere would look like a plane tangent to M at p.

However, one of the points of all this manifold stuff is that we really want to see the manifold as an intrinsic object, in its own right, rather than as embedded in ℝⁿ. So, we would like our notion of a tangent vector to not refer to an ambient space, but only to intrinsic properties of the manifold M in question.

45.7.i  Tangent space

To motivate this construction, let us start with an embedded case for which we know the answer already: a sphere.

Suppose f : S² → ℝ is a function on a sphere, and take a point p. Near the point p, f looks like a function on some open neighborhood of the origin. Thus we can think of taking a directional derivative along a vector v in the imagined tangent plane (i.e. some partial derivative). For a fixed v this partial derivative is a linear map

D_v : C^∞(M) → ℝ.

It turns out this goes the other way: if you know what Dv does to every smooth function, then you can recover v. This is the trick we use in order to create the tangent space. Rather than trying to specify a vector v directly (which we can’t do because we don’t have an ambient space),

The vectors are partial-derivative-like maps.

More formally, we have the following.

Definition 45.7.1. A derivation D at p is a linear map D : C^∞(M) → ℝ (i.e. assigning a real number to every smooth f) satisfying the following Leibniz rule: for any f, g we have the equality

D(fg) = f(p)·D(g) + g(p)·D(f) ∈ ℝ.

This is just a “product rule”. Then the tangent space is easy to define:

Definition 45.7.2. A tangent vector is just a derivation at p, and the tangent space Tp(M) is simply the set of all these tangent vectors.

In this way we have constructed the tangent space.

45.7.ii  The cotangent space

In fact, one can show that the product rule for D is equivalent to the following three conditions:

1.
D is linear, meaning D(af + bg) = aD(f) + bD(g).
2.
D(1M) = 0, where 1M is the constant function on M.
3.
D(fg) = 0 whenever f(p) = g(p) = 0. Intuitively, this means that if a function h = fg vanishes to second order at p, then its derivative along D should be zero.

This suggests a third equivalent definition: suppose we define

𝔪_p := {f ∈ C^∞(M) | f(p) = 0}

to be the set of functions which vanish at p (this is called the maximal ideal at p). In that case,

𝔪_p² = {∑ᵢ fᵢ·gᵢ | fᵢ(p) = gᵢ(p) = 0}

is the set of functions vanishing to second order at p. Thus, a tangent vector is really just a linear map

𝔪_p/𝔪_p² → ℝ.

In other words, the tangent space is actually the dual space of 𝔪_p/𝔪_p²; for this reason, the space 𝔪_p/𝔪_p² is defined as the cotangent space (the dual of the tangent space). This definition is even more abstract than the one with derivations above, but has some nice properties.

45.7.iii  Sanity check

With all these equivalent definitions, the last thing I should do is check that this definition of tangent space actually gives a vector space of dimension n. To do this it suffices to verify this for open subsets of ℝⁿ, which will imply the result for general manifolds M (which are locally open subsets of ℝⁿ). Using some real analysis, one can prove the following result:

Theorem 45.7.3
Suppose M ⊆ ℝⁿ is open and 0 ∈ M. Then

𝔪₀ = {smooth functions f : f(0) = 0}
𝔪₀² = {smooth functions f : f(0) = 0, (∇f)₀ = 0}.

In other words, 𝔪₀² is the set of functions which vanish at 0 and such that all first derivatives of f vanish at zero.

Thus, it follows that there is an isomorphism

𝔪₀/𝔪₀² ≅ ℝⁿ  by  f ↦ (∂f/∂x₁(0), …, ∂f/∂xₙ(0))

and so the cotangent space, hence tangent space, indeed has dimension n.

45.8  A few harder problems to think about

Problem 45A. Show that a differential 0-form on a smooth manifold M is the same thing as a smooth function M → ℝ.


Part XIII
Algebraic NT I: Rings of Integers

46  Algebraic integers

Here’s a first taste of algebraic number theory.

This is really close to the border between olympiads and higher math. You’ve always known that a + b√2 had a “norm” a² − 2b², and that somehow this norm was multiplicative. You’ve also always known that roots come in conjugate pairs. You might have heard of minimal polynomials but not know much about them.

This chapter and the next one will make all these vague notions precise. It’s drawn largely from the first chapter of [?].

46.1  Motivation from high school algebra

This is adapted from my blog, Power Overwhelming.

In high school precalculus, you’ll often be asked to find the roots of some polynomial with integer coefficients. For instance,

x³ − x² − x − 15 = (x − 3)(x² + 2x + 5)

has roots 3, −1 + 2i, −1 − 2i. Or as another example,

x³ − 3x² − 2x + 2 = (x + 1)(x² − 4x + 2)

has roots −1, 2 + √2, 2 − √2. You’ll notice that the irrational roots, like −1 ± 2i and 2 ± √2, are coming up in pairs. In fact, I think precalculus explicitly tells you that the imaginary roots come in conjugate pairs. More generally, it seems like all the roots of the form a + b√c come in “conjugate pairs”. And you can see why.

But a polynomial like

x³ − 8x + 4

has no rational roots. (The roots of this are approximately −3.0514, 0.51730, 2.5341.) Or even simpler,

x³ − 2

has only one real root, ∛2. These roots, even though they are irrational, have no “conjugate” pairs. Or do they?

Let’s try and figure out exactly what’s happening. Let α be any complex number. We define a minimal polynomial of α over ℚ to be a monic polynomial P(x) with rational coefficients such that P(α) = 0, and such that deg P is as small as possible.

Example 46.1.1 (Examples of minimal polynomials)

(a)
√2 has minimal polynomial x² − 2.
(b)
The imaginary unit i = √−1 has minimal polynomial x² + 1.
(c)
A primitive pth root of unity, ζ_p = e^{2πi/p}, has minimal polynomial x^{p−1} + x^{p−2} + ⋯ + 1, where p is a prime.

Note that 100x² − 200 is also a polynomial of the same degree which has √2 as a root; that’s why we want to require the polynomial to be monic. That’s also why we choose to work in the rational numbers; that way, we can divide by leading coefficients without worrying if we get non-integers.

Why do we care? The point is as follows: suppose we have another polynomial A(x) such that A(α) = 0. Then we claim that P(x) actually divides A(x)! That means that all the other roots of P will also be roots of A.

The proof is by contradiction: if not, by polynomial long division we can find a quotient and remainder Q(x), R(x) such that

A (x) = Q (x)P(x) + R(x)

and R(x)≢0. Notice that by plugging in x = α, we find that R(α) = 0. But deg R < deg P, and P(x) was supposed to be the minimal polynomial. That’s impossible!

It follows from this and the fact that the minimal polynomial is monic that it is unique (when it exists), so actually it is better to refer to the minimal polynomial.

Exercise 46.1.2. Can you find an element in ℂ that has no minimal polynomial?

Let’s look at a more concrete example. Consider A(x) = x³ − 3x² − 2x + 2 from the beginning. The minimal polynomial of 2 + √2 is P(x) = x² − 4x + 2 (why?). Now we know that if 2 + √2 is a root, then A(x) is divisible by P(x). And that’s how we know that if 2 + √2 is a root of A, then 2 − √2 must be a root too.

As another example, the minimal polynomial of ∛2 is x³ − 2. So ∛2 actually has two conjugates, namely α = ∛2 (cos 120° + i sin 120°) and β = ∛2 (cos 240° + i sin 240°). Thus any polynomial which vanishes at ∛2 also has α and β as roots!
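If you want to experiment with minimal polynomials, sympy computes them directly; a quick sketch reproducing the examples of this section:

import sympy as sp

x = sp.symbols('x')
print(sp.minimal_polynomial(2 + sp.sqrt(2), x))           # x**2 - 4*x + 2
print(sp.minimal_polynomial(sp.cbrt(2), x))               # x**3 - 2
print(sp.minimal_polynomial(sp.sqrt(2) + sp.sqrt(3), x))  # x**4 - 10*x**2 + 1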

Question 46.1.3 (Important but tautological: irreducible minimal). Let α be a root of the polynomial P(x). Show that P(x) is the minimal polynomial if and only if it is irreducible.

46.2  Algebraic numbers and algebraic integers

Prototypical example for this section: √2 is an algebraic integer (root of x² − 2), 1/2 is an algebraic number but not an algebraic integer (root of x − 1/2).

Let’s now work in much vaster generality. First, let’s give names to the new numbers we’ve discussed above.

Definition 46.2.1. An algebraic number is any α which is the root of some polynomial with coefficients in ℚ. The set of algebraic numbers is denoted ℚ̄.

Remark 46.2.2 — One can equally well say algebraic numbers are those that are roots of some polynomial with coefficients in ℤ (rather than ℚ), since any polynomial in ℚ[x] can be scaled to one in ℤ[x].

Definition 46.2.3. Consider an algebraic number α and its minimal polynomial P (which is monic and has rational coefficients). If it turns out the coefficients of P are integers, then we say α is an algebraic integer.

The set of algebraic integers is denoted ℤ̄.

Remark 46.2.4 — One can show, using Gauss’s Lemma, that if α is the root of any monic polynomial with integer coefficients, then α is an algebraic integer. So in practice, if I want to prove that √2 + √3 is an algebraic integer, then I only have to say “the polynomial (x² − 5)² − 24 works” without checking that it’s minimal.

Sometimes for clarity, we refer to elements of ℤ as rational integers.

Example 46.2.5 (Examples of algebraic integers)
The numbers

4,  i = √−1,  ∛2,  √2 + √3

are all algebraic integers, since they are the roots of the monic polynomials x − 4, x² + 1, x³ − 2 and (x² − 5)² − 24.

The number 1/2 has minimal polynomial x − 1/2, so it’s an algebraic number but not an algebraic integer. (In fact, the rational root theorem also directly implies that any monic integer polynomial does not have 1/2 as a root!)

There are two properties I want to give for these off the bat, because they’ll be used extensively in the tricky (but nice) problems at the end of the section. The first we prove now, since it’s very easy:

Proposition 46.2.6 (Rational algebraic integers are rational integers)
An algebraic integer is rational if and only if it is a rational integer. In symbols,

ℤ̄ ∩ ℚ = ℤ.

Proof. Let α be a rational number. If α is an integer, it is the root of x − α, hence an algebraic integer too.

Conversely, if P is a monic polynomial with integer coefficients such that P(α) = 0 then (by the rational root theorem, say) it follows α must be an integer. □

The other is that:

Proposition 46.2.7 (ℤ̄ is a ring and ℚ̄ is a field)
The algebraic integers form a ring. The algebraic numbers form a field.

We could prove this now if we wanted to, but the results in the next chapter will more or less do it for us, and so we take this on faith temporarily.

46.3  Number fields

Prototypical example for this section: ℚ(√2) is a typical number field.

Given any algebraic number α, we’re able to consider fields of the form ℚ(α). Let us write down the full version.

Definition 46.3.1. A number field K is a field containing ℚ as a subfield which is a finite-dimensional ℚ-vector space. The degree of K is its dimension.

Example 46.3.2 (Prototypical example)
Consider the field

K = ℚ(√2) = {a + b√2 | a, b ∈ ℚ}.

This is a field extension of ℚ, and has degree 2 (the basis being 1 and √2).

You might be confused that I wrote ℚ(√2) (which should permit denominators) instead of ℚ[√2], say. But if you read through ?? , you should see that the denominators don’t really matter: 1/(3 − √2) = (1/7)(3 + √2) anyways, for example. You can either check this now in general, or just ignore the distinction and pretend I wrote square brackets everywhere.

Exercise 46.3.3 (Unimportant). Show that if α is an algebraic number, then ℚ(α) ≅ ℚ[α].

Example 46.3.4 (Adjoining an algebraic number)
Let α be the root of some irreducible polynomial P(x) in ℚ[x]. The field ℚ(α) is a field extension as well, and the basis is 1, α, α², …, α^{m−1}, where m is the degree of α. In particular, the degree of ℚ(α) is just the degree of α.

Example 46.3.5 (Non-examples of number fields)
ℝ and ℂ are not number fields since there is no finite ℚ-basis of them.

46.4  Primitive element theorem, and monogenic extensions

Prototypical example for this section: ℚ(√3, √5) ≅ ℚ(√3 + √5). Can you see why?

I’m only putting this theorem here because I was upset that no one told me it was true (it’s a very natural conjecture), and I hope to not do the same to the reader. However, I’m not going to use it in anything that follows.

Theorem 46.4.1 (Artin’s primitive element theorem)
Every number field K is isomorphic to (α) for some algebraic number α.

The proof is left as ?? , since to prove it I need to talk about field extensions first.

The prototypical example

ℚ(√3, √5) ≅ ℚ(√3 + √5)

makes it clear why this theorem should not be too surprising.

46.5  A few harder problems to think about

Problem 46A. Find a polynomial with integer coefficients which has √2 + ∛3 as a root.

Problem 46B (Brazil 2006). Let p be an irreducible polynomial in ℚ[x] of degree larger than 1. Prove that if p has two roots r and s whose product is 1 then the degree of p is even.

Problem 46C. Consider n roots of unity ε₁, …, εₙ. Assume the average (ε₁ + ⋯ + εₙ)/n is an algebraic integer. Prove that either the average is zero or ε₁ = ⋯ = εₙ. (Used in ?? .)

Problem 46D. Which rational numbers q satisfy cos(qπ) ∈ ℚ?

Problem 46E (MOP 2010). There are n > 2 lamps arranged in a circle; initially one is on and the others are off. We may select any regular polygon whose vertices are among the lamps and toggle the states of all the lamps simultaneously. Show it is impossible to turn all lamps off.

Problem 46F (Kronecker’s theorem). Let α be an algebraic integer. Suppose all its Galois conjugates have absolute value one. Prove that α^N = 1 for some positive integer N.

Problem 46G. Is there an algebraic integer with absolute value one which is not a root of unity?

Problem 46H. Is the ring of algebraic integers Noetherian?

47  The ring of integers

47.1  Norms and traces

Prototypical example for this section: a + b√2 as an element of ℚ(√2) has norm a² − 2b² and trace 2a.

Remember when you did olympiads and we had like a² + b² was the “norm” of a + bi? Cool, let me tell you what’s actually happening.

First, let me make precise the notion of a conjugate.

Definition 47.1.1. Let α be an algebraic number, and let P(x) be its minimal polynomial, of degree m. Then the m roots of P are the (Galois) conjugates of α.

It’s worth showing at the moment that there are no repeated conjugates.

Lemma 47.1.2 (Irreducible polynomials have distinct roots)
An irreducible polynomial in ℚ[x] cannot have a complex double root.

Proof. Let f(x) ∈ ℚ[x] be the irreducible polynomial and assume it has a double root α. Take the derivative f′(x). This derivative has three interesting properties.

1.
Since α is a double root of f, f′(α) = 0 as well.
2.
The degree of f′ is deg f − 1, which is less than deg f.
3.
f′ is not the zero polynomial, since f is nonconstant.

Consider g = gcd(f, f′). We must have g ∈ ℚ[x] by the Euclidean algorithm. But the first two facts about f′ ensure that g is nonconstant and deg g < deg f. Yet g divides f, contradiction to the fact that f should be irreducible. □

Hence α has exactly as many conjugates as the degree of α.

Now, we would like to define the norm of an element N(α) as the product of its conjugates. For example, we want 2 + i to have norm (2 + i)(2 i) = 5, and in general for a + bi to have norm a2 + b2. It would be really cool if the norm was multiplicative; we already know this is true for complex numbers!

Unfortunately, this doesn’t quite work: consider

N(2+ i) = 5 and N(2 − i) = 5.

But (2 + i)(2 i) = 5, which doesn’t have norm 25 like we want, since 5 is degree 1 and has no conjugates at all. The reason this “bad” thing is happening is that we’re trying to define the norm of an element, when we really ought to be defining the norm of an element with respect to a particular K.

What I’m driving at is that the norm should have different meanings depending on which field you’re in. If we think of 5 as an element of ℚ, then its norm is 5. But thought of as an element of ℚ(i), its norm really ought to be 25. Let’s make this happen: for K a number field, we will now define N_{K/ℚ}(α) to be the norm of α with respect to K as follows.

Definition 47.1.3. Let α ∈ K have degree n, so ℚ(α) ⊆ K, and set k = (deg K)/n. The norm of α is defined as

N_{K/ℚ}(α) := (∏ Galois conjugates of α)^k.

The trace is defined as

Tr_{K/ℚ}(α) := k · (∑ Galois conjugates of α).

The exponent k is a “correction factor” that makes the norm of 5 into 5² = 25 when we view 5 as an element of ℚ(i) rather than an element of ℚ. For a “generic” element of K, we expect k = 1.

Exercise 47.1.4. Use what you know about nested vector spaces to convince yourself that k is actually an integer.

Example 47.1.5 (Norm of a + b√2)
Let α = a + b√2 ∈ ℚ(√2) = K. If b ≠ 0, then α and K both have degree 2. Thus the only conjugates of α are a ± b√2, which gives the norm

(a + b√2)(a − b√2) = a² − 2b².

The trace is (a − b√2) + (a + b√2) = 2a.

Nicely, the formulas a² − 2b² and 2a also work when b = 0.

Of importance is:

Proposition 47.1.6 (Norms and traces are rational integers)
If α is an algebraic integer, its norm and trace are rational integers.

Question 47.1.7. Prove it. (Vieta formula.)

That’s great, but it leaves a question unanswered: why is the norm multiplicative? To do this, I have to give a new definition of norm and trace.

Theorem 47.1.8 (Morally correct definition of norm and trace)
Let K be a number field of degree n, and let α ∈ K. Let μ_α : K → K denote the map

x ↦ αx

viewed as a linear map of ℚ-vector spaces. Then det μ_α = N_{K/ℚ}(α) and Tr μ_α = Tr_{K/ℚ}(α).

Since the trace and determinant don’t depend on the choice of basis, you can pick whatever basis you want and use whatever definition you got in high school. Fantastic, right?

Example 47.1.9 (Explicit computation of matrices for a + b√2)
Let K = ℚ(√2), and let 1, √2 be the basis of K. Let

α = a + b√2

(possibly even b = 0), and notice that

(a + b√2)(x + y√2) = (ax + 2yb) + (bx + ay)√2.

We can rewrite this in matrix form as

[ a  2b ] [ x ]   [ ax + 2yb ]
[ b   a ] [ y ] = [ bx + ay  ].

Consequently, we can interpret μ_α as the matrix

μ_α = [ a  2b ]
      [ b   a ].

Of course, the matrix will change if we pick a different basis, but the determinant and trace do not: they are always given by

det μ_α = a² − 2b²  and  Tr μ_α = 2a.

This interpretation explains why the same formula should work for a + b√2 even in the case b = 0.

Proof. I’ll prove the result for just the norm; the trace falls out similarly. Set

n = deg α,    kn = deg K.

The proof is split into two parts, depending on whether or not k = 1.

Proof if k = 1. Set n = deg α = deg K. Thus the norm actually is the product of the Galois conjugates. Also,

{1, α, …, α^{n−1}}

is linearly independent in K, and hence a basis (as dim K = n). Let’s use this as the basis for μ_α.

Let

xⁿ + c_{n−1}x^{n−1} + ⋯ + c₀

be the minimal polynomial of α. Thus μ_α(1) = α, μ_α(α) = α², and so on, but μ_α(α^{n−1}) = −c_{n−1}α^{n−1} − ⋯ − c₀. Therefore, μ_α is given by the matrix

M = [ 0  0  0  …  0  −c₀      ]
    [ 1  0  0  …  0  −c₁      ]
    [ 0  1  0  …  0  −c₂      ]
    [ ⋮  ⋮  ⋮  ⋱  ⋮  ⋮        ]
    [ 0  0  0  …  1  −c_{n−1} ]

Thus

det M = (−1)ⁿ c₀

and we’re done by Vieta’s formulas.

Proof if k > 1. We have nested vector spaces

ℚ ⊆ ℚ(α) ⊆ K.

Let e₁, …, eₖ be a ℚ(α)-basis for K (meaning: interpret K as a vector space over ℚ(α), and pick that basis). Since {1, α, …, α^{n−1}} is a basis for ℚ(α), the elements

e₁, e₁α, …, e₁α^{n−1}
e₂, e₂α, …, e₂α^{n−1}
⋮
eₖ, eₖα, …, eₖα^{n−1}

constitute a ℚ-basis of K. Using this basis, the map μ_α looks like

[ M            ]
[    M         ]
[       ⋱      ]
[            M ]

with k copies of M on the diagonal, where M is the same matrix as above: we just end up with one copy of our old matrix for each eᵢ. Thus det μ_α = (det M)^k, as needed. □

Question 47.1.10. Verify the result for traces as well.

From this it follows immediately that

N_{K/ℚ}(αβ) = N_{K/ℚ}(α) N_{K/ℚ}(β)

because by definition we have

μ_{αβ} = μ_α ∘ μ_β

and the determinant is multiplicative. In the same way, the trace is additive.

47.2  The ring of integers

Prototypical example for this section: If K = ℚ(√2), then 𝒪_K = ℤ[√2]. But if K = ℚ(√5), then 𝒪_K = ℤ[(1+√5)/2].

ℤ makes for better number theory than ℚ. In the same way, focusing on the algebraic integers of K gives us some really nice structure, and we’ll do that here.

Definition 47.2.1. Given a number field K, we define

𝒪_K := K ∩ ℤ̄

to be the ring of integers of K; in other words 𝒪K consists of the algebraic integers of K.

We do the classical example of a quadratic field now. Before proceeding, I need to write a silly number theory fact.

Exercise 47.2.2 (Annoying but straightforward). Let a and b be rational numbers, and d a squarefree positive integer. Determine, in terms of d mod 4, exactly when a + b√d is an algebraic integer. (For b ≠ 0 the minimal polynomial is x² − 2ax + (a² − db²).)

You’ll need to take mod 4.

Example 47.2.3 (Ring of integers of K = ℚ(√3))
Let K be as above. We claim that

𝒪_K = ℤ[√3] = {m + n√3 | m, n ∈ ℤ}.

We set α = a + b√3. Then α ∈ 𝒪_K exactly when its minimal polynomial has integer coefficients.

If b = 0, then the minimal polynomial is x − α = x − a, and thus α works if and only if it’s an integer. If b ≠ 0, then the minimal polynomial is

(x − a)² − 3b² = x² − 2a·x + (a² − 3b²).

From the exercise, this occurs exactly for a, b ∈ ℤ.

Example 47.2.4 (Ring of integers of K = ℚ(√5))
We claim that in this case

𝒪_K = ℤ[(1+√5)/2] = {m + n·(1+√5)/2 | m, n ∈ ℤ}.

The proof is exactly the same, except the exercise tells us instead that for b ≠ 0, we have both the possibility that a, b ∈ ℤ or that a, b ∈ ℤ + 1/2. This reflects the fact that (1+√5)/2 is the root of x² − x − 1 = 0; no such thing is possible with √3.

In general, the ring of integers of K = ℚ(√d) is

𝒪_K = ℤ[√d]          if d ≡ 2, 3 (mod 4)
𝒪_K = ℤ[(1+√d)/2]    if d ≡ 1 (mod 4).
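You can verify the two cases concretely; e.g. (1+√5)/2 is an algebraic integer while (1+√3)/2 is not (a quick sketch):

import sympy as sp

alpha = (1 + sp.sqrt(5)) / 2   # d = 5 ≡ 1 (mod 4)
beta = (1 + sp.sqrt(3)) / 2    # d = 3 ≡ 3 (mod 4)
assert sp.expand(alpha**2 - alpha - 1) == 0   # root of the monic x**2 - x - 1
print(sp.expand(beta**2 - beta))              # 1/2, so beta's minimal polynomial
                                              # is x**2 - x - 1/2: not integral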

What we’re going to show is that 𝒪_K behaves in K a lot like the integers do in ℚ. First we show K consists of quotients of numbers in 𝒪_K. In fact, we can do better:

Example 47.2.5 (Rationalizing the denominator)
For example, consider K = ℚ(√3). The number x = 1/(4+√3) is an element of K, but by “rationalizing the denominator” we can write

1/(4+√3) = (4−√3)/13.

So we see that in fact, x is 1/13 of an integer in 𝒪_K.

The theorem holds true more generally.

Theorem 47.2.6 (K = ℚ · 𝒪_K)
Let K be a number field, and let x ∈ K be any element. Then there exists an integer n such that nx ∈ 𝒪_K; in other words,

x = (1/n)·α

for some α ∈ 𝒪_K.

Exercise 47.2.7. Prove this yourself. (Start by using the fact that x has a minimal polynomial with rational coefficients. Alternatively, take the norm.)

Now we are going to show 𝒪K is a ring; we’ll check it is closed under addition and multiplication. To do so, the easiest route is:

Lemma 47.2.8 (α ∈ ℤ̄ if and only if ℤ[α] finitely generated)
Let α ∈ ℂ. Then α is an algebraic integer if and only if the abelian group ℤ[α] is finitely generated.

Proof. Note that α is an algebraic integer if and only if it’s the root of some nonzero, monic polynomial with integer coefficients. Suppose first that

α^N = c_{N−1}α^{N−1} + c_{N−2}α^{N−2} + ⋯ + c₀.

Then the set 1, α, …, α^{N−1} generates ℤ[α], since we can repeatedly replace α^N until all powers of α are less than N.

Conversely, suppose that ℤ[α] is finitely generated by some b₁, …, bₘ. Viewing the bᵢ as polynomials in α, we can select a large integer N (say N = deg b₁ + ⋯ + deg bₘ + 2015) and express α^N in the bᵢ’s to get

α^N = c₁b₁(α) + ⋯ + cₘbₘ(α).

The above gives us a monic polynomial in α, and the choice of N guarantees it is not zero. So α is an algebraic integer. □

Example 47.2.9 (1/2 isn’t an algebraic integer)
We already know 1/2 isn’t an algebraic integer. So we expect

ℤ[1/2] = {a/2^m | a, m ∈ ℤ and m ≥ 0}

to not be finitely generated, and this is the case.

Question 47.2.10. To make the last example concrete: name all the elements of ℤ[1/2] that cannot be written as an integer combination of

{1/2, 7/8, 13/64, 2015/4096, 1/1048576}.

Now we can state the theorem.

Theorem 47.2.11 (Algebraic integers are closed under + and ×)
The set ℤ̄ is closed under addition and multiplication; i.e. it is a ring. In particular, 𝒪_K is also a ring for any number field K.

Proof. Let α, β ∈ ℤ̄. Then ℤ[α] and ℤ[β] are finitely generated. Hence so is ℤ[α, β]. (Details: if ℤ[α] has ℤ-basis a₁, …, aₘ and ℤ[β] has ℤ-basis b₁, …, bₙ, then take the mn elements aᵢbⱼ.)

Now ℤ[α ± β] and ℤ[αβ] are subsets of ℤ[α, β] and so they are also finitely generated. Hence α ± β and αβ are algebraic integers. □

In fact, something even better is true. As you saw, for ℚ(√3) we had 𝒪_K = ℤ[√3]; in other words, 𝒪_K was generated by 1 and √3. Something similar was true for ℚ(√5). We claim that in fact, the general picture looks exactly like this.

Theorem 47.2.12 (𝒪_K is a free ℤ-module of rank n)
Let K be a number field of degree n. Then 𝒪_K is a free ℤ-module of rank n, i.e. 𝒪_K ≅ ℤⁿ as an abelian group. In other words, 𝒪_K has a ℤ-basis of n elements as

𝒪_K = {c₁α₁ + ⋯ + cₙαₙ | cᵢ ∈ ℤ}

where the αᵢ are algebraic integers in 𝒪_K.

Proof. TODO: add this in. (Originally, there was an incorrect proof; the mistake was pointed out 2020-02-12 on https://math.stackexchange.com/q/3543641/229197 and I hope to supply a correct one soon.) □

This last theorem shows that in many ways 𝒪_K is a “lattice” in K. That is, for a number field K we can find α₁, …, αₙ in 𝒪_K such that

𝒪_K ≅ ℤα₁ ⊕ ℤα₂ ⊕ ⋯ ⊕ ℤαₙ
K ≅ ℚα₁ ⊕ ℚα₂ ⊕ ⋯ ⊕ ℚαₙ

as abelian groups.

47.3  On monogenic extensions

Recall that it turned out number fields K could all be expressed as ℚ(α) for some α. We might hope that something similar is true of the ring of integers: that we can write

𝒪K  = ℤ[𝜃]

in which case {1, θ, …, θ^{n−1}} serves both as a basis of K and as the ℤ-basis for 𝒪_K (here n = [K : ℚ]). In other words, we hope that the basis of 𝒪_K is actually a “power basis”.

This is true for the most common examples we use: the quadratic fields ℚ(√d) above, and the cyclotomic fields ℚ(ζ_p) (see Problem 47E).

Unfortunately, it is not true in general: the first counterexample is ℚ(α) for α a root of X³ − X² − 2X − 8.

We call an extension with this nice property monogenic. As we’ll later see, monogenic extensions have a really nice factoring algorithm, ?? .

47.4  A few harder problems to think about

Problem 47A. Show that α is a unit of 𝒪_K (meaning α⁻¹ ∈ 𝒪_K) if and only if N_{K/ℚ}(α) = ±1.

Problem 47B. Let K be a number field. What is the field of fractions of 𝒪K?

Problem 47C (Russian olympiad 1984). Find all integers m and n such that

(5 + 3√2)^m = (3 + 5√2)^n.

Problem 47D (USA TST 2012). Decide whether there exist a,b,c > 2010 satisfying

a3 + 2b3 + 4c3 = 6abc + 1.

Problem 47E (Cyclotomic Field). Let p be an odd rational prime and ζ_p a primitive pth root of unity. Let K = ℚ(ζ_p). Prove that 𝒪_K = ℤ[ζ_p]. (In fact, the result is true even if p is not a prime.)

48  Unique factorization (finally!)

Took long enough.

48.1  Motivation

Suppose we’re interested in solutions to the Diophantine equation n = x² + 5y² for a given n. The idea is to try and “factor” n in ℤ[√−5], for example

6 = (1 + √−5)(1 − √−5).

Unfortunately, this is not so simple, because as I’ve said before we don’t have unique factorization of elements:

6 = 2·3 = (1 + √−5)(1 − √−5).

One reason this doesn’t work is that we don’t have a notion of a greatest common divisor. We can write (35, 77) = 7, but what do we make of (3, 1 + √−5)?

The trick is to use ideals as a “generalized GCD”. Recall that by (a, b) I mean the ideal {ax + by | x, y ∈ ℤ[√−5]}. You can see that (35, 77) = (7), but (3, 1 + √−5) will be left “unsimplified” because it doesn’t represent an actual value in the ring. Using these sets (ideals) as elements, it turns out that we can develop a full theory of prime factorization, and we do so in this chapter.

In other words, we use the ideal (a₁, …, aₘ) to interpret a “generalized GCD” of a₁, …, aₘ. In particular, if we have a number x we want to represent, we encode it as just (x).

Going back to our example of 6,

(6) = (2)·(3) = (1 + √−5)·(1 − √−5).

Please take my word for it that in fact, the complete prime factorization of (6) into prime ideals is

(6) = (2, 1 − √−5)² (3, 1 + √−5)(3, 1 − √−5) = 𝔭²𝔮₁𝔮₂.

In fact, (2) = 𝔭², (3) = 𝔮₁𝔮₂, (1 + √−5) = 𝔭𝔮₁, (1 − √−5) = 𝔭𝔮₂. So 6 indeed factorizes uniquely into ideals, even though it doesn’t factor into elements.

As one can see above, ideal factorization is more refined than element factorization. Once you have the factorization into ideals, you can from there recover all the factorizations into elements. The upshot of this is that if we want to write n as x² + 5y², we just have to factor n into ideals, and from there we can recover all factorizations into elements, and finally all ways to write n as x² + 5y². Since we can already break n into rational prime factors (for example 6 = 2·3 above) we just have to figure out how each rational prime p ∣ n breaks down. There’s a recipe for this, ?? ! In fact, I’ll even tell you what it says in this special case: the ideal (p) factors in ℤ[√−5] in exactly the way x² + 5 factors modulo p.

In this chapter we’ll develop this theory of unique factorization in full generality.

Remark 48.1.1 — In this chapter, I’ll be using the letters 𝔞, 𝔟, 𝔭, 𝔮 for ideals of 𝒪K. When fractional ideals arise, I’ll use I and J for them.

48.2  Ideal arithmetic

Prototypical example for this section: (x)(y) = (xy). In any case, think in terms of generators.

First, I have to tell you how to add and multiply two ideals 𝔞 and 𝔟.

Definition 48.2.1. Given two ideals 𝔞 and 𝔟 of a ring R, we define

𝔞 + 𝔟 := {a + b | a ∈ 𝔞, b ∈ 𝔟}
𝔞 · 𝔟 := {a₁b₁ + ⋯ + aₙbₙ | aᵢ ∈ 𝔞, bᵢ ∈ 𝔟}.

(Note that infinite sums don’t make sense in general rings, which is why in 𝔞 𝔟 we cut off the sum after some finite number of terms.) You can readily check these are actually ideals. This definition is more natural if you think about it in terms of the generators of 𝔞 and 𝔟.

Proposition 48.2.2 (Ideal arithmetic via generators)
Suppose 𝔞 = (a₁, a₂, …, aₙ) and 𝔟 = (b₁, …, bₘ) are ideals in a ring R. Then

(a)
𝔞 + 𝔟 is the ideal generated by a₁, …, aₙ, b₁, …, bₘ.
(b)
𝔞 · 𝔟 is the ideal generated by the aᵢbⱼ, for 1 ≤ i ≤ n and 1 ≤ j ≤ m.

Proof. Pretty straightforward; just convince yourself that this result is correct. □

In other words, for sums you append the two sets of generators together, and for products you take products of the generators. Note that for principal ideals, this coincides with “normal” multiplication, for example

(3)·(5) = (15)

in ℤ.
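Since every ideal of ℤ is (g) where g is the gcd of any set of generators, Proposition 48.2.2 is easy to play with there; a toy sketch:

from functools import reduce
from math import gcd

def ideal(*gens):
    # In Z the ideal (g1, ..., gn) is principal, generated by gcd(g1, ..., gn).
    return reduce(gcd, gens)

print(ideal(35, 77))        # 7:  (35, 77) = (7)
print(ideal(6, 10))         # 2:  (6) + (10) = (2), appending the generators
print(ideal(3 * 5))         # 15: (3)(5) = (15), products of the generators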

Remark 48.2.3 — Note that for an ideal 𝔞 and an element c, the set

c𝔞 = {ca | a ∈ 𝔞}

is equal to (c)·𝔞. So “scaling” and “multiplying by principal ideals” are the same thing. This is important, since we’ll be using the two notions interchangeably.

Finally, since we want to do factorization we better have some notion of divisibility. So we define:

Definition 48.2.4. We say 𝔞 divides 𝔟 and write 𝔞 ∣ 𝔟 if 𝔞 ⊇ 𝔟.

Note the reversal of inclusions! So (3) divides (15), because (15) is contained in (3); every multiple of 15 is a multiple of 3. And from the example in the previous section: in ℤ[√−5], (3, 1 − √−5) divides (3) and (1 − √−5).

Finally, the prime ideals are defined as in ?? : 𝔭 is prime if xy ∈ 𝔭 implies x ∈ 𝔭 or y ∈ 𝔭. This is compatible with the definition of divisibility:

Exercise 48.2.5. A nonzero proper ideal 𝔭 is prime if and only if whenever 𝔭 divides 𝔞𝔟, 𝔭 divides one of 𝔞 or 𝔟.

As mentioned in ?? , this also lets us ignore multiplication by units: (3) = (−3).

48.3  Dedekind domains

Prototypical example for this section: Any 𝒪K is a Dedekind domain.

We now define a Dedekind domain as follows.

Definition 48.3.1. An integral domain 𝒜 is a Dedekind domain if it is Noetherian, integrally closed, and every nonzero prime ideal of 𝒜 is in fact maximal. (The last condition is the important one.)

Here there’s one new word I have to define for you, but we won’t make much use of it.

Definition 48.3.2. Let R be an integral domain and let K be its field of fractions. We say R is integrally closed if the only elements a ∈ K which are roots of monic polynomials with coefficients in R are the elements of R (which are roots of the trivial polynomial x − r).

The interesting condition in the definition of a Dedekind domain is the last one: prime ideals and maximal ideals are the same thing. The other conditions are just technicalities, but “primes are maximal” has real substance.

Example 48.3.3 (ℤ is a Dedekind domain)
The ring ℤ is a Dedekind domain. Note that

1.
ℤ is Noetherian, since every ideal is principal and in particular finitely generated;
2.
ℤ is integrally closed, since a rational root of a monic integer polynomial must be an integer (rational root theorem); and
3.
the nonzero prime ideals of ℤ are exactly (p) for p prime, and each of these is maximal.
The case of interest is a ring 𝒪K in which we wish to do factorizing. We’re now going to show that for any number field K, the ring 𝒪K is a Dedekind domain. First, the boring part.

Proposition 48.3.4 (𝒪K integrally closed and Noetherian)
For any number field K, the ring 𝒪K is integrally closed and Noetherian.

Proof. Boring, but here it is anyways for completeness.

Since 𝒪_K ≅ ℤⁿ as an abelian group, we get that it’s Noetherian.

Now we show that 𝒪_K is integrally closed. Suppose that η ∈ K is the root of some polynomial with coefficients in 𝒪_K. Thus

ηⁿ = α_{n−1}·η^{n−1} + α_{n−2}·η^{n−2} + ⋯ + α₀

where αi ∈𝒪K. We want to show that η ∈𝒪K as well.

Well, from the above, 𝒪_K[η] is finitely generated… thus ℤ[η] ⊆ 𝒪_K[η] is finitely generated. So η ∈ ℤ̄, and hence η ∈ K ∩ ℤ̄ = 𝒪_K. □

Now let’s do the fun part. We’ll prove a stronger result, which will re-appear repeatedly.

Theorem 48.3.5 (Important: prime ideals divide rational primes)
Let 𝒪K be a ring of integers and 𝔭 a nonzero prime ideal inside it. Then 𝔭 contains a rational prime p. Moreover, 𝔭 is maximal.

Proof. Take any α ≠ 0 in 𝔭. Its Galois conjugates are algebraic integers, so their product N(α)/α is in 𝒪_K (even though each individual conjugate need not be in K). Consequently, N(α) ∈ 𝔭, and we conclude 𝔭 contains a nonzero rational integer.

Then take the smallest positive integer in 𝔭, say p. We must have that p is a rational prime, since otherwise 𝔭 ∋ p = xy implies that one of x, y ∈ 𝔭 is a smaller positive integer. This shows the first part.

We now do something pretty tricky to show 𝔭 is maximal. Look at 𝒪_K/𝔭; since 𝔭 is prime it’s supposed to be an integral domain… but we claim that it’s actually finite! To see this, we forget that we can multiply on 𝒪_K. Recalling that 𝒪_K ≅ ℤⁿ as an abelian group, we obtain a map

𝔽_p^⊕n ≅ 𝒪_K/(p) ↠ 𝒪_K/𝔭.

Hence |𝒪_K/𝔭| ≤ pⁿ is finite. Since finite integral domains are fields (?? ) we are done. □

Since every nonzero prime 𝔭 is maximal, we now know that 𝒪K is a Dedekind domain. Note that this tricky proof is essentially inspired by the solution to ?? .

48.4  Unique factorization works

Okay, I’ll just say it now!

Unique factorization works perfectly in Dedekind domains!

Theorem 48.4.1 (Prime factorization works)
Let 𝔞 be a nonzero proper ideal of a Dedekind domain 𝒜. Then 𝔞 can be written as a finite product of nonzero prime ideals 𝔭i, say

𝔞 = 𝔭₁^{e₁} 𝔭₂^{e₂} ⋯ 𝔭_g^{e_g}

and this factorization is unique up to the order of the 𝔭i.

Moreover, 𝔞 divides 𝔟 if and only if for every prime ideal 𝔭, the exponent of 𝔭 in 𝔞 is at most the corresponding exponent in 𝔟.

I won’t write out the proof, but I’ll describe the basic method of attack. Section 3 of [?] does a nice job of explaining it. When we proved the fundamental theorem of arithmetic, the basic plot was:

(1)
Show that if p is a rational prime then p ∣ bc means p ∣ b or p ∣ c. (This is called Euclid’s Lemma.)
(2)
Use strong induction to show that every N > 1 can be written as the product of primes (easy).
(3)
Show that if p₁⋯pₘ = q₁⋯qₙ for some primes (not necessarily unique), then p₁ = qᵢ for some i, say q₁.
(4)
Divide both sides by p1 and use induction.

What happens if we try to repeat the proof here? We get step 1 for free, because we’re using a better definition of “prime”. We can also do step 3, since it follows from step 1. But step 2 doesn’t work, because for abstract Dedekind domains we don’t really have a notion of size. And step 4 doesn’t work because we don’t yet have a notion of what the inverse of a prime ideal is.

Well, it turns out that we can define the inverse 𝔞⁻¹ of an ideal, and I’ll do so by the end of this chapter. You then need to check that 𝔞 · 𝔞⁻¹ = (1) = 𝒜. In fact, even this isn’t easy: you have to check it first for prime ideals 𝔭, then prove prime factorization, and only then deduce it in general. Moreover, 𝔞⁻¹ is not actually an ideal, so you need to work in the field of fractions K instead of 𝒜.

So the main steps in the new situation are as follows:

(1)
First, show that every ideal 𝔞 divides 𝔭₁⋯𝔭_g for some finite collection of primes. (This is an application of Zorn’s Lemma.)
(2)
Define 𝔭⁻¹ and show that 𝔭𝔭⁻¹ = (1).
(3)
Show that a factorization exists (again using Zorn’s Lemma).
(4)
Show that it’s unique, using the new inverse we’ve defined.

Finally, let me comment on how nice this is if 𝒜 is a PID (like ℤ). Thus every element a ∈ 𝒜 is in direct correspondence with an ideal (a). Now suppose (a) factors as a product of ideals 𝔭ᵢ = (pᵢ), say,

(a) = (p₁)^{e₁} (p₂)^{e₂} ⋯ (pₙ)^{eₙ}.

This verbatim reads

a = u·p₁^{e₁} p₂^{e₂} ⋯ pₙ^{eₙ}

where u is some unit (recall ?? ). Hence, Dedekind domains which are PIDs satisfy unique factorization for elements, just like in ℤ. (In fact, the converse of this is true.)

48.5  The factoring algorithm

Let’s look at some examples from quadratic fields. Recall that if K = ℚ(√d), then

𝒪_K = ℤ[√d]          if d ≡ 2, 3 (mod 4)
𝒪_K = ℤ[(1+√d)/2]    if d ≡ 1 (mod 4).

Also, recall that the norm of a + b√−d is given by a² + db².

Example 48.5.1 (Factoring 6 in the integers of ℚ(√−5))
Let 𝒪_K = ℤ[√−5] arise from K = ℚ(√−5). We’ve already seen that

(6) = (2)·(3) = (1 + √−5)(1 − √−5)

and you can’t get any further with these principal ideals. But let

𝔭 = (1 + √−5, 2) = (1 − √−5, 2)  and  𝔮₁ = (1 + √−5, 3),  𝔮₂ = (1 − √−5, 3).

Then it turns out (6) = 𝔭²𝔮₁𝔮₂. More specifically, (2) = 𝔭², (3) = 𝔮₁𝔮₂, and (1 + √−5) = 𝔭𝔮₁ and (1 − √−5) = 𝔭𝔮₂. (Proof in just a moment.)

I want to stress that all our ideals are computed relative to 𝒪K. So for example,

(2) = {2x | x ∈ 𝒪K } .

How do we know in this example that 𝔭 is prime/maximal? (Again, these are the same since we’re in a Dedekind domain.) Answer: look at 𝒪K𝔭 and see if it’s a field. There is a trick to this: we can express

𝒪_K = ℤ[√−5] ≅ ℤ[x]/(x² + 5).

So when we take that mod 𝔭, we get that

𝒪_K/𝔭 = ℤ[x]/(x² + 5, 2, 1 + x) ≅ 𝔽₂[x]/(x² + 5, x + 1)

as rings.

Question 48.5.2. Conclude that 𝒪_K/𝔭 ≅ 𝔽₂, and satisfy yourself that 𝔮₁ and 𝔮₂ are also maximal.

I should give an explicit example of an ideal multiplication: let’s compute

𝔮₁𝔮₂ = ((1 + √−5)(1 − √−5), 3(1 + √−5), 3(1 − √−5), 9)
     = (6, 3 + 3√−5, 3 − 3√−5, 9)
     = (6, 3 + 3√−5, 3 − 3√−5, 3)
     = (3)

where we first did 9 − 6 = 3 (think Euclidean algorithm!), then noted that all the other generators don’t contribute anything we don’t already have with the 3 (again these are ideals computed in 𝒪_K). You can do the computation for 𝔭², 𝔭𝔮₁, 𝔭𝔮₂ in the same way.

Finally, it’s worth pointing out that we should quickly verify that 𝔭 ≠ (x) for every x; in other words, that 𝔭 is not principal. Assume for contradiction that it is. Then x divides both 1 + √−5 and 2, in the sense that 1 + √−5 = α₁x and 2 = α₂x for some α₁, α₂ ∈ 𝒪_K. (Principal ideals are exactly the “multiples” of x, so (x) = x𝒪_K.) Taking the norms, we find that N_{K/ℚ}(x) divides both

N_{K/ℚ}(1 + √−5) = 6  and  N_{K/ℚ}(2) = 4.

Since 𝔭 ≠ (1), x cannot be a unit, so its norm must be 2. But there are no elements of norm 2 = a² + 5b² in 𝒪_K.

Example 48.5.3 (Factoring 3 in the integers of ℚ(√−17))
Let 𝒪_K = ℤ[√−17] arise from K = ℚ(√−17). We know 𝒪_K ≅ ℤ[x]/(x² + 17). Now

𝒪_K/3𝒪_K ≅ ℤ[x]/(3, x² + 17) ≅ 𝔽₃[x]/(x² − 1).

This already shows that (3) cannot be a prime (i.e. maximal) ideal, since otherwise our result should be a field. Anyways, we have a projection

𝒪_K ↠ 𝔽₃[x]/((x − 1)(x + 1)).

Let 𝔮1 be the pre-image of (x 1) in the image, that is,

𝔮₁ = (3, √−17 − 1).

Similarly,

𝔮₂ = (3, √−17 + 1).

We have 𝒪_K/𝔮₁ ≅ 𝔽₃, so 𝔮₁ is maximal (prime). Similarly 𝔮₂ is prime. Magically, you can check explicitly that

𝔮1𝔮2 = (3).

Hence this is the factorization of (3) into prime ideals.

The fact that 𝔮1𝔮2 = (3) looks magical, but it’s really true:

𝔮₁𝔮₂ = (3, √−17 − 1)(3, √−17 + 1)
     = (9, 3√−17 + 3, 3√−17 − 3, 18)
     = (9, 3√−17 + 3, 6)
     = (3, 3√−17 + 3, 6)
     = (3).

In fact, it turns out this always works in general: given a rational prime p, there is an algorithm to factor p in any 𝒪_K of the form ℤ[θ].

Theorem 48.5.4 (Factoring algorithm / Dedekind-Kummer theorem)
Let K be a number field. Let θ ∈ 𝒪_K with [𝒪_K : ℤ[θ]] = j < ∞, and let p be a prime not dividing j. Then (p) = p𝒪_K is factored as follows:

Let f be the minimal polynomial of θ and factor f̄ mod p as

f̄ ≡ ∏_{i=1}^{g} (f̄ᵢ)^{eᵢ}  (mod p).

Then 𝔭ᵢ = (fᵢ(θ), p) is prime for each i and the factorization of (p) is

𝒪_K ⊇ (p) = ∏_{i=1}^{g} 𝔭ᵢ^{eᵢ}.

In particular, if K is monogenic with 𝒪_K = ℤ[θ] then j = 1 and the theorem applies for all primes p.

In almost all our applications in this book, K will be monogenic; i.e. j = 1. Here ψ̄ denotes the image in 𝔽_p[x] of a polynomial ψ ∈ ℤ[x].

Question 48.5.5. There are many possible pre-images fᵢ we could have chosen (for example if f̄ᵢ = x² + 1 (mod 3), we could pick fᵢ = x² + 3x + 7). Why does this not affect the value of 𝔭ᵢ?

Note that earlier, we could check the factorization worked for any particular case. The proof that this works is much the same, but we need one extra tool, the ideal norm. After that we leave the proposition as ?? .

This algorithm gives us a concrete way to compute prime factorizations of (p) in any monogenic number field with 𝒪K = ℤ[𝜃]. To summarize the recipe:

1.
Find the minimal polynomial of 𝜃, say f ∈ ℤ[x].
2.
Factor f mod p into irreducible polynomials, say f̄ ≡ f̄1^{e1} f̄2^{e2} ⋯ f̄g^{eg}.
3.
Compute 𝔭i = (fi(𝜃), p) for each i.

Then your factorization is (p) = 𝔭1^{e1} ⋯ 𝔭g^{eg}.
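The recipe is easy to run in a computer algebra system. As a minimal sketch (sympy's factor_list accepts a modulus keyword for factoring over 𝔽p), here is the recipe re-deriving the factorizations in ℤ[√−17] from Example 48.5.3 and from chapter 49 below:

    from sympy import symbols, factor_list

    x = symbols('x')
    f = x**2 + 17                      # minimal polynomial of theta = sqrt(-17)

    for p in (2, 3, 5):
        _, factors = factor_list(f, modulus=p)
        print(p, factors)

    # p = 2: [(x + 1, 2)]               so (2) = (2, theta + 1)^2
    # p = 3: [(x - 1, 1), (x + 1, 1)]   so (3) = (3, theta - 1)(3, theta + 1)
    # p = 5: [(x**2 + 2, 1)]            so (5) stays prime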

Exercise 48.5.6. Factor (29) in ℤ[i] using the above algorithm.

48.6  Fractional ideals

Prototypical example for this section: Analog to ℚ for ℤ, allowing us to take inverses of ideals. Prime factorization works in the nicest way possible.

We now have a neat theory of factoring ideals of 𝒜, just like factoring the integers. Now note that our factorization of ℤ naturally gives a way to factor elements of ℚ: just factor the numerator and denominator separately.

Let’s make the analogy clearer. The analogue of a rational number is as follows.

Definition 48.6.1. Let 𝒜 be a Dedekind domain with field of fractions K. A fractional ideal J of K is a set of the form

J = \frac 1x \cdot \mathfrak a \quad \text{where } x \in \mathcal A \text{ and } \mathfrak a \text{ is an integral ideal.}

For emphasis, ideals of 𝒜 will be sometimes referred to as integral ideals.

You might be a little surprised by this definition: one would expect that a fractional ideal should be of the form 𝔞∕𝔟 for some integral ideals 𝔞, 𝔟. But in fact, it suffices to just take x ∈ 𝒜 in the denominator. The analogy is that when we looked at 𝒪K, we found that we only needed integer denominators: 1∕(4 − √3) = (1∕13)(4 + √3). Similarly here, it will turn out that we only need to look at (1∕x)𝔞 rather than 𝔞∕𝔟, and so we define it this way from the beginning. See ??  for a different equivalent definition.

Example 48.6.2 (5∕2 ℤ is a fractional ideal)
The set

\frac 52 \mathbb Z = \left\{ \frac 52 n \mid n \in \mathbb Z \right\} = \frac 12 (5)

is a fractional ideal of ℤ.

Now, as we prescribed, the fractional ideals form a multiplicative group:

Theorem 48.6.3 (Fractional ideals form a group)
Let 𝒜 be a Dedekind domain and K its field of fractions. For any integral ideal 𝔞, the set

𝔞−1 = {x ∈ K | x𝔞 ⊆ (1) = 𝒜}

is a fractional ideal with 𝔞𝔞⁻¹ = (1).

Definition 48.6.4. Thus nonzero fractional ideals of K form a group under multiplication with identity (1) = 𝒜. This ideal group is denoted JK.

Example 48.6.5 ((3)⁻¹ in ℤ)
Please check that in ℤ we have

(3)^{-1} = \left\{ \frac 13 n \mid n \in \mathbb Z \right\} = \frac 13 \mathbb Z.

It follows that every fractional ideal J can be uniquely written as

J = \prod_i \mathfrak p_i^{n_i} \cdot \prod_i \mathfrak q_i^{-m_i}

where ni and mi are positive integers. In fact, J is an integral ideal if and only if all its exponents are nonnegative, just like the case with integers. So, a perhaps better way to think about fractional ideals is as products of prime ideals, possibly with negative exponents.

48.7  The ideal norm

One last tool is the ideal norm, which gives us a notion of the “size” of an ideal.

Definition 48.7.1. The ideal norm (or absolute norm) of a nonzero ideal 𝔞 ⊆𝒪K is defined as |𝒪K ∕𝔞| and denoted N(𝔞).

Example 48.7.2 (Ideal norm of (5) in the Gaussian integers)
Let K = (i), 𝒪K = [i]. Consider the ideal (5) in 𝒪K. We have that

𝒪K ∕(5) ∼= {a+  bi | a,b ∈ ℤ ∕5ℤ}

so (5) has ideal norm 25, corresponding to the fact that 𝒪K∕(5) has 5² = 25 elements.

Example 48.7.3 (Ideal norm of (2 + i) in the Gaussian integers)
You’ll notice that

𝒪K ∕(2 + i) ∼= 𝔽5

since mod 2 + i we have both 5 ≡ 0 and i ≡ −2. (Indeed, since (2 + i) is prime we had better get a field!) Thus N((2 + i)) = 5; similarly N((2 − i)) = 5.

Thus the ideal norm measures how “roomy” the ideal is: that is, (5) is a lot more spaced out in ℤ[i] than it is in ℤ. (This intuition will be important when we will actually view 𝒪K as a lattice.)
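If you like, you can count residues by brute force; here is a minimal Python sketch of the two examples above. It uses two facts worth checking by hand: β divides z in ℤ[i] exactly when z·β̄ has both coordinates divisible by N(β), and representatives a + bi with 0 ≤ a, b < 5 already cover all residues mod 2 + i since 5 ∈ (2 + i). The helper names are ad hoc.

    def divisible(beta, z):
        # beta | z in Z[i]  <=>  z * conj(beta) / N(beta) has integer parts
        a, b = beta; x, y = z
        n = a*a + b*b
        return (x*a + y*b) % n == 0 and (y*a - x*b) % n == 0

    def count_residues(beta, box):
        reps = []
        for a in range(box):
            for b in range(box):
                if not any(divisible(beta, (a - c, b - d)) for c, d in reps):
                    reps.append((a, b))
        return len(reps)

    print(count_residues((5, 0), 5))   # 25 = N((5))
    print(count_residues((2, 1), 5))   # 5  = N((2+i))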

Question 48.7.4. What are the ideals with ideal norm one?

Our example with (5) suggests several properties of the ideal norm which turn out to be true:

Lemma 48.7.5 (Properties of the absolute norm)
Let 𝔞 be a nonzero ideal of 𝒪K.

(a)
N(𝔞) is finite.
(b)
For any other nonzero ideal 𝔟, N(𝔞𝔟) = N(𝔞)N(𝔟).
(c)
If 𝔞 = (a) is principal, then N(𝔞) = |NK∕ℚ(a)|.

I unfortunately won’t prove these properties, though we already did (a) in our proof that 𝒪K was a Dedekind domain.

The fact that N is completely multiplicative lets us also consider the norm of a fractional ideal J by the natural extension

J = \prod_i \mathfrak p_i^{n_i} \cdot \prod_i \mathfrak q_i^{-m_i} \implies \operatorname N(J) := \frac{\prod_i \operatorname N(\mathfrak p_i)^{n_i}}{\prod_i \operatorname N(\mathfrak q_i)^{m_i}}.

Thus N is a natural group homomorphism JK → ℚ ∖ {0}.

48.8  A few harder problems to think about

Problem 48A. Show that there are three different factorizations of 77 in 𝒪K, where K = ℚ(√−13).

Problem 48B. Let K = ℚ(∛2); take for granted that 𝒪K = ℤ[∛2]. Find the factorization of (5) in 𝒪K.

Problem 48C (Fermat’s little theorem). Let 𝔭 be a prime ideal in some ring of integers 𝒪K. Show that for α ∈𝒪K,

\alpha^{\operatorname N(\mathfrak p)} \equiv \alpha \pmod{\mathfrak p}.

Problem 48D. Let 𝒜 be a Dedekind domain with field of fractions K, and pick J ⊆ K. Show that J is a fractional ideal if and only if

(i)
J is closed under addition and multiplication by elements of 𝒜, and
(ii)
J is finitely generated as an abelian group.

More succinctly: J is a fractional ideal ⟺ J is a finitely generated 𝒜-module.

Problem 48E. In the notation of ?? , let I = ∏_{i=1}^{g} 𝔭i^{ei}. Assume for simplicity that K is monogenic, hence 𝒪K = ℤ[𝜃].

(a)
Prove that each 𝔭i is prime.
(b)
Show that (p) divides I.
(c)
Use the norm to show that (p) = I.

49  Minkowski bound and class groups

We now have a neat theory of unique factorization of ideals. In the case of a PID, this in fact gives us a UFD. Sweet.

We'll define, in a moment, something called the class group which measures how far 𝒪K is from being a PID; the bigger the class group, the farther 𝒪K is from being a PID. In particular, 𝒪K is a PID exactly when it has trivial class group.

Then we will provide some inequalities which let us put restrictions on the class group; for instance, this will let us show in some cases that the class group must be trivial. Astonishingly, the proof will use Minkowski’s theorem, a result from geometry.

49.1  The class group

Prototypical example for this section: PID’s have trivial class group.

Let K be a number field, and let JK denote the multiplicative group of fractional ideals of 𝒪K. Let PK denote the multiplicative group of principal fractional ideals: those of the form (x) = x𝒪K for some x ∈ K.

Question 49.1.1. Check that PK is also a multiplicative group. (This is really easy: name x𝒪K · y𝒪K and (x𝒪K)⁻¹.)

As JK is abelian, we can now define the class group to be the quotient

ClK := JK ∕PK .

The elements of ClK are called classes.

Equivalently,

The class group ClK is the set of nonzero fractional ideals modulo scaling by a constant in K.

In particular, ClK is trivial if all ideals are principal, since the nonzero principal ideals are the same up to scaling.

The size of the class group is called the class number. It’s a beautiful theorem that the class number is always finite, and the bulk of this chapter will build up to this result. It requires several ingredients.

49.2  The discriminant of a number field

Prototypical example for this section: Quadratic fields.

Let's say I have K = ℚ(√2). As we've seen before, this means 𝒪K = ℤ[√2], meaning

\mathcal O_K = \left\{ a + b\sqrt 2 \mid a, b \in \mathbb Z \right\}.

The key insight now is that you might think of this as a lattice: geometrically, we want to think about this the same way we think about ℤ².

Perversely, we might try to embed this into ℝ² by sending a + b√2 to (a, b). But this is a little stupid, since we're rudely making K, which somehow lives inside ℝ and is “one-dimensional” in that sense, into a two-dimensional space. It also depends on a choice of basis, which we don't like. A better way is to think about the fact that there are two embeddings σ1 : K → ℝ and σ2 : K → ℝ, namely the identity and conjugation:

σ1(a + b√2) = a + b√2
σ2(a + b√2) = a − b√2.

Fortunately for us, these embeddings both have real image. This leads us to consider the set of points

(σ1(α), σ2(α)) ∈ ℝ²  for α ∈ K.

This lets us visualize what 𝒪K looks like in ℝ². The points of K are dense in ℝ², but the points of 𝒪K cut out a lattice.

To see how big the lattice is, we look at how {1, √2}, the generators of 𝒪K, behave. The point corresponding to a + b√2 in the lattice is

a \cdot (1, 1) + b \cdot (\sqrt 2, -\sqrt 2).

The mesh of the lattice is defined as the hypervolume of the “fundamental parallelepiped” spanned by these two generating vectors. For this particular case, it ought to be equal to the area of that parallelogram, which is

\det \begin{bmatrix} 1 & -\sqrt 2 \\ 1 & \sqrt 2 \end{bmatrix} = 2\sqrt 2.

The definition of the discriminant is precisely this, except with an extra square factor (since permutation of rows could lead to changes in sign in the matrix above). ??  shows that the squaring makes ΔK an integer.

To make the next definition, we invoke:

Theorem 49.2.1 (The n embeddings of a number field)
Let K be a number field of degree n. Then there are exactly n field homomorphisms K ↪ ℂ, say σ1, …, σn, which fix ℚ.

Proof. Deferred to ?? , once we have the tools of Galois theory. □

In fact, in ??  we see that for α ∈ K, σi(α) runs over the conjugates of α as i = 1, …, n. It follows that

\operatorname{Tr}_{K/\mathbb Q}(\alpha) = \sum_{i=1}^{n} \sigma_i(\alpha) \quad\text{and}\quad \operatorname N_{K/\mathbb Q}(\alpha) = \prod_{i=1}^{n} \sigma_i(\alpha).

This allows us to define:

Definition 49.2.2. Suppose α1, …, αn is a ℤ-basis of 𝒪K. The discriminant of the number field K is defined by

\Delta_K := \det \begin{bmatrix} \sigma_1(\alpha_1) & \dots & \sigma_n(\alpha_1) \\ \vdots & \ddots & \vdots \\ \sigma_1(\alpha_n) & \dots & \sigma_n(\alpha_n) \end{bmatrix}^2.

This does not depend on the choice of the {αi}; we will not prove this here.

Example 49.2.3 (Discriminant of K = ℚ(√2))
We have 𝒪K = ℤ[√2] and as discussed above the discriminant is

\Delta_K = (-2\sqrt 2)^2 = 8.

Example 49.2.4 (Discriminant of ℚ(i))
Let K = ℚ(i). We have 𝒪K = ℤ[i] = ℤ ⊕ iℤ. The embeddings are the identity and complex conjugation, which take 1 to (1, 1) and i to (i, −i). So

\Delta_K = \det \begin{bmatrix} 1 & 1 \\ i & -i \end{bmatrix}^2 = (-2i)^2 = -4.

This example illustrates that the discriminant need not be positive for number fields which wander into the complex plane (the lattice picture is a less perfect analogy). But again, as we’ll prove in the problems the discriminant is always an integer.

Example 49.2.5 (Discriminant of ℚ(√5))
Let K = ℚ(√5). This time, 𝒪K = ℤ[(1 + √5)∕2], and so the discriminant is going to look a little bit different. The embeddings are still a + b√5 ↦ a ± b√5.

Applying this to the ℤ-basis {1, (1 + √5)∕2}, we get

\Delta_K = \det \begin{bmatrix} 1 & 1 \\ \frac{1+\sqrt 5}{2} & \frac{1-\sqrt 5}{2} \end{bmatrix}^2 = (-\sqrt 5)^2 = 5.

Exercise 49.2.6. Extend all this to show that if K = ℚ(√d) for d ≠ 1 squarefree, we have

\Delta_K = \begin{cases} d & \text{if } d \equiv 1 \pmod 4 \\ 4d & \text{if } d \equiv 2, 3 \pmod 4. \end{cases}

Actually, let me point out something curious: recall that the polynomial discriminant of Ax² + Bx + C is B² − 4AC. Then:

  • for d ≡ 2, 3 (mod 4), the minimal polynomial x² − d of √d has polynomial discriminant 4d, and
  • for d ≡ 1 (mod 4), the minimal polynomial x² − x − (d − 1)∕4 of (1 + √d)∕2 has polynomial discriminant d,

matching ΔK in both cases. This is not a coincidence! ??  asserts that this is true in general; hence the name “discriminant”.
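This is easy to play with computationally. A quick sanity check using sympy's built-in polynomial discriminant (recall x² − x − 1 is the minimal polynomial of (1 + √5)∕2):

    from sympy import symbols, discriminant

    x = symbols('x')
    print(discriminant(x**2 - 2, x))       # 8  = Delta_K for K = Q(sqrt 2)
    print(discriminant(x**2 - x - 1, x))   # 5  = Delta_K for K = Q(sqrt 5)
    print(discriminant(x**2 + 1, x))       # -4 = Delta_K for K = Q(i)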

49.3  The signature of a number field

Prototypical example for this section: ℚ(¹⁰⁰√2) has signature (2, 49).

In the example of K = ℚ(i), we more or less embedded K into the space ℂ. However, K is a degree two extension, so what we'd really like to do is embed it into ℝ². To do so, we're going to take advantage of complex conjugation.

Let K be a number field and σ1, …, σn be its embeddings. We distinguish between the real embeddings (which map all of K into ℝ) and the complex embeddings (which map some part of K outside ℝ). Notice that if σ is a complex embedding, then so is the conjugate σ̄ ≠ σ; hence complex embeddings come in pairs.

Definition 49.3.1. Let K be a number field of degree n, and set

r1 = number of real embeddings
r2 = number of pairs of complex embeddings.

The signature of K is the pair (r1,r2). Observe that r1 + 2r2 = n.

Example 49.3.2 (Basic examples of signatures)

(a)
ℚ has signature (1, 0).
(b)
ℚ(√2) has signature (2, 0).
(c)
ℚ(i) has signature (0, 1).
(d)
Let K = ℚ(∛2), and let ω be a cube root of unity. The elements of K are

K = \left\{ a + b\sqrt[3]{2} + c\sqrt[3]{4} \mid a, b, c \in \mathbb Q \right\}.

Then the signature is (1, 1), because the three embeddings are

\sigma_1 : \sqrt[3]{2} \mapsto \sqrt[3]{2}, \quad \sigma_2 : \sqrt[3]{2} \mapsto \sqrt[3]{2}\,\omega, \quad \sigma_3 : \sqrt[3]{2} \mapsto \sqrt[3]{2}\,\omega^2.

The first of these is real and the latter two are a conjugate pair.

Example 49.3.3 (Even more signatures)
In the same vein, ℚ(⁹⁹√2) and ℚ(¹⁰⁰√2) have signatures (1, 49) and (2, 49).

Question 49.3.4. Verify the signatures of the above two number fields.

From now on, we will number the embeddings of K in such a way that

\sigma_1, \sigma_2, \dots, \sigma_{r_1}

are the real embeddings, while

\sigma_{r_1+1} = \overline{\sigma_{r_1+r_2+1}}, \quad \sigma_{r_1+2} = \overline{\sigma_{r_1+r_2+2}}, \quad \dots, \quad \sigma_{r_1+r_2} = \overline{\sigma_{r_1+2r_2}}

are the r2 pairs of complex embeddings. We define the canonical embedding of K as

K \overset{\iota}{\hookrightarrow} \mathbb R^{r_1} \times \mathbb C^{r_2} \quad \text{by} \quad \alpha \mapsto \left( \sigma_1(\alpha), \dots, \sigma_{r_1}(\alpha), \sigma_{r_1+1}(\alpha), \dots, \sigma_{r_1+r_2}(\alpha) \right).

All we’ve done is omit, for the complex case, the second of the embeddings in each conjugate pair. This is no big deal, since they are just conjugates; the above tuple is all the information we need.

For reasons that will become obvious in a moment, I’ll let τ denote the isomorphism

\tau : \mathbb R^{r_1} \times \mathbb C^{r_2} \overset{\sim}{\longrightarrow} \mathbb R^{r_1+2r_2} = \mathbb R^n

by breaking each complex number into its real and imaginary part, as

\alpha \mapsto \big( \sigma_1(\alpha), \dots, \sigma_{r_1}(\alpha), \operatorname{Re} \sigma_{r_1+1}(\alpha), \operatorname{Im} \sigma_{r_1+1}(\alpha), \dots, \operatorname{Re} \sigma_{r_1+r_2}(\alpha), \operatorname{Im} \sigma_{r_1+r_2}(\alpha) \big).

Example 49.3.5 (Example of canonical embedding)
As before let K = ℚ(∛2) and set

\sigma_1 : \sqrt[3]{2} \mapsto \sqrt[3]{2}, \quad \sigma_2 : \sqrt[3]{2} \mapsto \sqrt[3]{2}\,\omega, \quad \sigma_3 : \sqrt[3]{2} \mapsto \sqrt[3]{2}\,\omega^2

where ω = −1∕2 + (√3∕2)i, noting that we've already arranged indices so σ1 = id is real while σ2 and σ3 are a conjugate pair. So the embeddings K ↪ ℝ × ℂ ≅ ℝ³ (via ι and then τ) are given by

\alpha \overset{\iota}{\longmapsto} (\sigma_1(\alpha), \sigma_2(\alpha)) \overset{\tau}{\longmapsto} (\sigma_1(\alpha), \operatorname{Re} \sigma_2(\alpha), \operatorname{Im} \sigma_2(\alpha)).

For concreteness, taking α = 9 + ∛2 gives

9 + \sqrt[3]{2} \overset{\iota}{\longmapsto} \left( 9 + \sqrt[3]{2}, \ 9 + \sqrt[3]{2}\,\omega \right) = \left( 9 + \sqrt[3]{2}, \ 9 - \frac{\sqrt[3]{2}}{2} + \frac{\sqrt[6]{108}}{2} i \right) \in \mathbb R \times \mathbb C

\overset{\tau}{\longmapsto} \left( 9 + \sqrt[3]{2}, \ 9 - \frac{\sqrt[3]{2}}{2}, \ \frac{\sqrt[6]{108}}{2} \right) \in \mathbb R^3.
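Numerically, the same computation is a three-liner; here is a floating-point sketch (the values are approximate, of course):

    import cmath

    cbrt2 = 2 ** (1 / 3)
    omega = cmath.exp(2j * cmath.pi / 3)        # a primitive cube root of unity

    alpha1 = 9 + cbrt2                          # sigma_1(alpha), real
    alpha2 = 9 + cbrt2 * omega                  # sigma_2(alpha), complex

    print(alpha1, alpha2)                       # iota(alpha) in R x C
    print(alpha1, alpha2.real, alpha2.imag)     # tau of that: a point of R^3
    print(108 ** (1 / 6) / 2)                   # agrees with alpha2.imag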

Now, the whole point of this is that we want to consider the resulting lattice when we take 𝒪K. In fact, we have:

Lemma 49.3.6
Consider the composition of the embeddings K ↪ ℝ^{r1} × ℂ^{r2} ≅ ℝⁿ. Then as before, 𝒪K becomes a lattice L in ℝⁿ, with mesh equal to

\frac{1}{2^{r_2}} \sqrt{ |\Delta_K| }.

Proof. Fun linear algebra problem (you just need to manipulate determinants). Left as ?? . □

From this we can deduce:

Lemma 49.3.7
Consider the composition of the embeddings K ↪ ℝ^{r1} × ℂ^{r2} ≅ ℝⁿ. Let 𝔞 be an ideal in 𝒪K. Then the image of 𝔞 is a lattice L𝔞 in ℝⁿ with mesh equal to

\frac{\operatorname N(\mathfrak a)}{2^{r_2}} \sqrt{ |\Delta_K| }.

Sketch of Proof. Let

d = \operatorname N(\mathfrak a) := [\mathcal O_K : \mathfrak a].

Then in the lattice L𝔞, we somehow only take 1∕d-th of the points which appear in the lattice L, which is why the area increases by a factor of N(𝔞). To make this all precise I would need to do a lot more with lattices and geometry than I have space for in this chapter, so I will omit the details. But I hope you can see why this is intuitively true. □

49.4  Minkowski’s theorem

Now I can tell you why I insisted we move from r1 × r2 to n. In geometry, there’s a really cool theorem of Minkowski’s that goes as follows.

Theorem 49.4.1 (Minkowski)
Let S ⊆ ℝⁿ be a convex set containing 0 which is centrally symmetric (meaning that x ∈ S ⟺ −x ∈ S). Let L be a lattice with mesh d. If either

(a)
The volume of S exceeds 2nd, or
(b)
The volume of S equals 2nd and S is compact,

then S contains a nonzero lattice point of L.

Question 49.4.2. Show that the condition 0 ∈ S is actually extraneous in the sense that any nonempty, convex, centrally symmetric set contains the origin.

Sketch of Proof. Part (a) is surprisingly simple and has a very olympiad-esque solution: it's basically Pigeonhole on areas. We'll prove part (a) in the special case n = 2, L = ℤ² for simplicity, as the proof can easily be generalized to any lattice and any n. Thus we want to show that any such convex set S with area more than 4 contains a nonzero lattice point.

Dissect the plane into 2 × 2 squares

[2a− 1,2a + 1]× [2b− 1,2b + 1]

and overlay all these squares on top of each other. By the Pigeonhole Principle, we find there exist two points p ≠ q ∈ S which map to the same point. Since S is symmetric, −q ∈ S. Then ½(p − q) ∈ S (convexity) and is a nonzero lattice point.

I’ll briefly sketch part (b): the idea is to consider (1+𝜀)S for 𝜀 > 0 (this is “S magnified by a small factor 1+𝜀”). This satisfies condition (a). So for each 𝜀 > 0 the set of nonzero lattice points in (1+𝜀)S, say S𝜀, is a finite nonempty set of (discrete) points (the “finite” part follows from the fact that (1+𝜀)S is bounded). So there has to be some point that’s in S𝜀 for every 𝜀 > 0 (why?), which implies it’s in S. □

49.5  The trap box

The last ingredient we need is a set to apply Minkowski’s theorem to. I propose:

Definition 49.5.1. Let M be a positive real. In ℝ^{r1} × ℂ^{r2}, define the box S to be the set of points (x1, …, x_{r1}, z1, …, z_{r2}) such that

\sum_{i=1}^{r_1} |x_i| + 2 \sum_{j=1}^{r_2} |z_j| \le M.

Note that this depends on the value of M.

Think of this box as a mousetrap: anything that falls in it is going to have a small norm, and our goal is to use Minkowski to lure some nonzero element into it.

That is, suppose α ∈ 𝔞 falls into the box I've defined above, which means

M \ge \sum_{i=1}^{r_1} |\sigma_i(\alpha)| + 2 \sum_{i=r_1+1}^{r_1+r_2} |\sigma_i(\alpha)| = \sum_{i=1}^{n} |\sigma_i(\alpha)|,

where we are remembering that the last few σ’s come in conjugate pairs. This looks like the trace, but the absolute values are in the way. So instead, we apply AM-GM to obtain:

Lemma 49.5.2 (Effect of the mousetrap)
Let α ∈ 𝒪K, and suppose ι(α) is in S (where ι : K ↪ ℝ^{r1} × ℂ^{r2} as usual). Then

\left| \operatorname N_{K/\mathbb Q}(\alpha) \right| = \prod_{i=1}^{n} |\sigma_i(\alpha)| \le \left( \frac{M}{n} \right)^n.

The last step we need to do is compute the volume of the box. This is again some geometry I won’t do, but take my word for it:

Lemma 49.5.3 (Size of the mousetrap)
Let τ : r1 × r2∼
−→n as before. Then the image of S under τ is a convex, compact, centrally symmetric set with volume

2^{r_1} \cdot \left( \frac{\pi}{2} \right)^{r_2} \cdot \frac{M^n}{n!}.

Question 49.5.4. (Sanity check) Verify that the above is correct for the signatures (r1,r2) = (2,0) and (r1,r2) = (0,1), which are the possible signatures when n = 2.

49.6  The Minkowski bound

We can now put everything we have together to obtain the great Minkowski bound.

Theorem 49.6.1 (Minkowski bound)
Let 𝔞 ⊆ 𝒪K be any nonzero ideal. Then there exists 0 ≠ α ∈ 𝔞 such that

\left| \operatorname N_{K/\mathbb Q}(\alpha) \right| \le \left( \frac{4}{\pi} \right)^{r_2} \frac{n!}{n^n} \sqrt{ |\Delta_K| } \cdot \operatorname N(\mathfrak a).

Proof. This is a matter of putting all our ingredients together: the mesh of the lattice L𝔞, the volume of the box S, and Minkowski's theorem.

Pick the value of M for which the volume of the box equals exactly 2ⁿ times the mesh of L𝔞. Then Minkowski's theorem gives that some 0 ≠ α ∈ 𝔞 lands inside the box, and the mousetrap is configured to force |NK∕ℚ(α)| ≤ Mⁿ∕nⁿ. The correct choice of M satisfies

M^n = M^n \cdot 2^n \cdot \frac{\text{mesh}}{\text{vol box}} = 2^n \cdot \frac{n!}{2^{r_1} \cdot (\pi/2)^{r_2}} \cdot 2^{-r_2} \sqrt{ |\Delta_K| } \operatorname N(\mathfrak a)

which gives the bound after some arithmetic. □

49.7  The class group is finite

Definition 49.7.1. Let

M_K = \left( \frac{4}{\pi} \right)^{r_2} \frac{n!}{n^n} \sqrt{ |\Delta_K| }

for brevity. Note that it is a constant depending on K.

So that’s cool and all, but what we really wanted was to show that the class group is finite. How can the Minkowski bound help? Well, you might notice that we can rewrite it to say

\operatorname N\left( (\alpha) \cdot \mathfrak a^{-1} \right) \le M_K

where MK is some constant depending on K, and α ∈ 𝔞.

Question 49.7.2. Show that (α) · 𝔞⁻¹ is an integral ideal. (Unwind definitions.)

But in the class group we mod out by principal ideals like (α). If we shut our eyes for a moment and mod out, the above statement becomes “N(𝔞⁻¹) ≤ MK”. The precise statement of this is

Corollary 49.7.3
Let K be a number field, and pick a fractional ideal J. Then we can find α such that 𝔟 = (α) · J is integral and N(𝔟) ≤ MK.

Proof. For fractional ideals I and J write I ∼ J to mean that I = (α)J for some α; then ClK is just the result of modding out by ∼. Let J be a fractional ideal. Then J⁻¹ is some other fractional ideal. By definition, for some α ∈ 𝒪K we have that αJ⁻¹ is an integral ideal 𝔞. The Minkowski bound tells us that for some x ∈ 𝔞, we have N((x) · 𝔞⁻¹) ≤ MK. But (x) · 𝔞⁻¹ ∼ 𝔞⁻¹ = (αJ⁻¹)⁻¹ ∼ J. □

Corollary 49.7.4 (Finiteness of class group)
Class groups are always finite.

Proof. For every class in ClK, we can identify an integral ideal 𝔞 with norm less than MK. We just have to show there are finitely many such integral ideals; this will mean there are finitely many classes.

Suppose we want to build such an ideal 𝔞 = 𝔭1^{e1} ⋯ 𝔭m^{em}. Recall that a prime ideal 𝔭i must have some rational prime p inside it, meaning 𝔭i divides (p) and p divides N(𝔭i). So let's group all the 𝔭i we want to build 𝔞 with based on which (p) they came from.

To be more dramatic: imagine you have a cherry tree; each branch corresponds to a prime (p) and contains as cherries (prime ideals) the factors of (p) (finitely many). Your bucket (the ideal 𝔞 you're building) can only hold a total weight (norm) of MK. So you can't even touch the branches higher than MK. You can repeat cherries (oops), but the weight of a cherry on branch (p) is definitely at least p; all this means that the number of ways to build 𝔞 is finite. □

49.8  Computation of class numbers

Definition 49.8.1. The order of ClK is called the class number of K.

Remark 49.8.2 — If ClK = 1, then 𝒪K is a PID, hence a UFD.

By computing the actual value of MK, we can quite literally build the entire “cherry tree” mentioned in the previous proof. Let’s give an example how!

Proposition 49.8.3
The field ℚ(√−67) has class number 1.

Proof. Since K = ℚ(√−67) has signature (0, 1) and discriminant ΔK = −67 (since −67 ≡ 1 (mod 4)) we can compute

M_K = \left( \frac{4}{\pi} \right)^1 \cdot \frac{2!}{2^2} \sqrt{67} \approx 5.2.

That means we can cut off the cherry tree after (2), (3), (5): any cherry with norm at most MK must sit on one of these branches. We now want to factor each of these in 𝒪K = ℤ[𝜃], where 𝜃 = (1 + √−67)∕2 has minimal polynomial x² − x + 17. But something miraculous happens: x² − x + 17 has no roots modulo 2, 3, or 5, and hence (being a quadratic) is irreducible modulo each of them.

It's our lucky day; all of the ideals (2), (3), (5) are prime (already principal). To put it another way, each of the three branches has only one (large) cherry on it. That means any time we put together an integral ideal with norm ≤ MK, it is actually principal. In fact, these guys have norm 4, 9, 25 respectively… so we can't even touch (3) and (5), and the only ideals we can get are (1) and (2) (with norms 1 and 4).

Now we claim that’s all. Pick a fractional ideal J. By ?? , we can find an integral ideal 𝔟 J with N(𝔟) MK. But by the above, either 𝔟 = (1) or 𝔟 = (2), both of which are principal, and hence trivial in ClK. So J is trivial in ClK too, as needed. □
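The “miracle” above is a finite check; for instance, three lines of Python confirm it:

    for p in (2, 3, 5):
        roots = [t for t in range(p) if (t * t - t + 17) % p == 0]
        print(p, roots)   # all three lists are empty: no roots mod 2, 3, or 5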

Let’s do a couple more.

Theorem 49.8.4 (Gaussian integers ℤ[i] form a UFD)
The field ℚ(i) has class number 1.

Proof. This is 𝒪K where K = ℚ(i), so we just want ClK to be trivial. We have MK = (2∕π)√4 < 2. So every class contains an integral ideal 𝔟 satisfying

\operatorname N(\mathfrak b) \le \left( \frac{4}{\pi} \right)^1 \cdot \frac{2!}{2^2} \cdot \sqrt 4 = \frac{4}{\pi} < 2.

Well, that’s silly: we don’t have any branches to pick from at all. In other words, we can only have 𝔟 = (1). □

Here’s another example of something that still turns out to be unique factorization, but this time our cherry tree will actually have cherries that can be picked.

Proposition 49.8.5 (ℤ[√7] is a UFD)
The field ℚ(√7) has class number 1.

Proof. First we compute the Minkowski bound.

Question 49.8.6. Check that MK ≈ 2.646.

So this time, the only branch is (2). Let's factor (2) as usual: the polynomial x² − 7 reduces as (x − 1)(x + 1) (mod 2), and hence

(2) = \left( 2, \ \sqrt 7 - 1 \right)\left( 2, \ \sqrt 7 + 1 \right).

Oops! We now have two cherries, and they both seem reasonable. But actually, I claim that

\left( 2, \ \sqrt 7 - 1 \right) = \left( 3 - \sqrt 7 \right).
Question 49.8.7. Prove this.

So both the cherries are principal ideals, and as before we conclude that ClK is trivial. But note that this time, the prime ideal (2) actually splits; we got lucky that the two cherries were principal but this won’t always work. □

How about some nontrivial class groups? First, we use a lemma that will help us with narrowing down the work in our cherry tree.

Lemma 49.8.8 (Ideals divide their norms)
Let 𝔟 be an integral ideal with N(𝔟) = n. Then 𝔟 divides the ideal (n).

Proof. By definition, n = |𝒪K∕𝔟|. Treating 𝒪K∕𝔟 as an (additive) abelian group and using Lagrange's theorem, we find

0 \equiv \underbrace{\alpha + \dots + \alpha}_{n \text{ times}} = n\alpha \pmod{\mathfrak b} \quad \text{for all } \alpha \in \mathcal O_K.

Thus (n) ⊆ 𝔟, done. □

Now we can give such an example.

Proposition 49.8.9 (Class group of ℚ(√−17))
The number field K = ℚ(√−17) has class group ℤ∕4ℤ.

You are not obliged to read the entire proof in detail, as it is somewhat gory. The idea is just that there are some cherries which are not trivial in the class group.

Proof. Since ΔK = −68, we compute the Minkowski bound

M_K = \frac{4}{\pi} \sqrt{17} < 6.

Now, it suffices to factor with (2), (3), (5). The minimal polynomial of √−17 is x² + 17, so as usual

(2) = \left( 2, \ \sqrt{-17} + 1 \right)^2
(3) = \left( 3, \ \sqrt{-17} - 1 \right)\left( 3, \ \sqrt{-17} + 1 \right)
(5) = (5)

corresponding to the factorizations of x² + 17 modulo each of 2, 3, 5. Set 𝔭 = (2, √−17 + 1) and 𝔮1 = (3, √−17 − 1), 𝔮2 = (3, √−17 + 1). We can compute

\operatorname N(\mathfrak p) = 2 \quad\text{and}\quad \operatorname N(\mathfrak q_1) = \operatorname N(\mathfrak q_2) = 3.

In particular, they are not principal: no element of 𝒪K has norm 2 or 3, since a² + 17b² never equals 2 or 3. The ideal (5) is out the window; it has norm 25. Hence, the three cherries are 𝔭, 𝔮1, 𝔮2.

The possible ways to arrange these cherries into ideals with norm at most 5 are

\left\{ (1), \ \mathfrak p, \ \mathfrak q_1, \ \mathfrak q_2, \ \mathfrak p^2 \right\}.

However, you can compute

\mathfrak p^2 = (2)

so 𝔭² and (1) are in the same class; that is, [𝔭²] is trivial. In particular, the class group has order at most 4.

From now on, let [𝔞] denote the class (member of the class group) that 𝔞 is in. Since 𝔭 isn't principal (so [𝔭] ≠ [(1)]), it follows that [𝔭] has order two. So Lagrange's theorem says that ClK has order either 2 or 4.

Now we claim [𝔮1]² ≠ [(1)], which implies that [𝔮1] has order greater than 2. If not, 𝔮1² is principal. We know N(𝔮1²) = 9, so this can only occur if 𝔮1² = (3); this would force 𝔮1 = 𝔮2. This is impossible since 𝔮1 + 𝔮2 = (1).

Thus, [𝔮1] has even order greater than 2. So it has to have order 4. From this we deduce

\operatorname{Cl}_K \cong \mathbb Z/4\mathbb Z. \qquad \square

Remark 49.8.10 — When we did this at Harvard during Math 129, there was a five-minute interruption in which students (jokingly) complained about the difficulty of evaluating (4∕π)√17. Excerpt:

“Will we be allowed to bring a small calculator on the exam?” – Student 1
“What does the size have to do with anything? You could have an Apple Watch” – Professor
“Just use the fact that π ≥ 3” – me
“Even [other professor] doesn’t know that, how are we supposed to?” – Student 2
“You have to do this yourself!” – Professor
“This is an outrage.” – Student 1

49.9  A few harder problems to think about

Problem 49A. Show that K = ℚ(√−163) has trivial class group, and hence 𝒪K = ℤ[(1 + √−163)∕2] is a UFD.

Problem 49B. Determine the class group of ℚ(√−31).

Problem 49C (China TST 1998). Let n be a positive integer. A polygon in the plane (not necessarily convex) has area greater than n. Prove that one can translate it so that it contains at least n + 1 lattice points.

Problem 49D (?? ). Consider the composition of the embeddings K ↪ ℝ^{r1} × ℂ^{r2} ≅ ℝⁿ. Show that the image of 𝒪K ⊆ K has mesh equal to

\frac{1}{2^{r_2}} \sqrt{ |\Delta_K| }.

Problem 49E. Let p 1 (mod 4) be a prime. Show that there are unique integers a > b > 0 such that a2 + b2 = p.

Problem 49F (Korea national olympiad 2014). Let p be an odd prime and k a positive integer such that p divides k² + 5. Prove that there exist positive integers m, n such that p² = m² + 5n².

50  More properties of the discriminant

I'll remind you that the discriminant of a number field K is given by

\Delta_K := \det \begin{bmatrix} \sigma_1(\alpha_1) & \dots & \sigma_n(\alpha_1) \\ \vdots & \ddots & \vdots \\ \sigma_1(\alpha_n) & \dots & \sigma_n(\alpha_n) \end{bmatrix}^2

where α1, …, αn is a ℤ-basis for 𝒪K, and the σi are the n embeddings of K into ℂ.

Several examples, properties, and equivalent definitions follow.

50.1  A few harder problems to think about

Problem 50A (Discriminant of cyclotomic field). Let p be an odd rational prime and ζp a primitive pth root of unity. Let K = ℚ(ζp). Show that

\Delta_K = (-1)^{\frac{p-1}{2}} p^{p-2}.

Problem 50B (Trace representation of ΔK). Let α1, …, αn be a ℤ-basis for 𝒪K. Prove that

\Delta_K = \det \begin{bmatrix}
\operatorname{Tr}_{K/\mathbb Q}(\alpha_1^2) & \operatorname{Tr}_{K/\mathbb Q}(\alpha_1\alpha_2) & \dots & \operatorname{Tr}_{K/\mathbb Q}(\alpha_1\alpha_n) \\
\operatorname{Tr}_{K/\mathbb Q}(\alpha_2\alpha_1) & \operatorname{Tr}_{K/\mathbb Q}(\alpha_2^2) & \dots & \operatorname{Tr}_{K/\mathbb Q}(\alpha_2\alpha_n) \\
\vdots & \vdots & \ddots & \vdots \\
\operatorname{Tr}_{K/\mathbb Q}(\alpha_n\alpha_1) & \operatorname{Tr}_{K/\mathbb Q}(\alpha_n\alpha_2) & \dots & \operatorname{Tr}_{K/\mathbb Q}(\alpha_n^2)
\end{bmatrix}.

In particular, ΔK is an integer.

Problem 50C (Root representation of ΔK). The discriminant of a quadratic polynomial Ax² + Bx + C is defined as B² − 4AC. More generally, the polynomial discriminant of a polynomial f ∈ ℤ[x] of degree n is

\Delta(f) := c^{2n-2} \prod_{1 \le i < j \le n} (z_i - z_j)^2

where z1, …, zn are the roots of f, and c is the leading coefficient of f.

Suppose K is monogenic with 𝒪K = ℤ[𝜃]. Let f denote the minimal polynomial of 𝜃 (hence monic). Show that

ΔK  = Δ (f).

Problem 50D. Show that if K ≠ ℚ is a number field then |ΔK| > 1.

Problem 50E (Brill’s theorem). For a number field K with signature (r1,r2), show that ΔK > 0 if and only if r2 is even.

Problem 50F (Stickelberger theorem). Let K be a number field. Prove that

ΔK  ≡ 0 or 1 (mod  4).

51  Bonus: Let’s solve Pell’s equation!

This is an optional aside, and can be safely ignored. (On the other hand, it’s pretty short.)

51.1  Units

Prototypical example for this section: ±1, roots of unity, 3 − 2√2 and its powers.

Recall according to ??  that α ∈𝒪K is invertible if and only if

NK ∕ℚ (α ) = ±1.

We let 𝒪K× denote the set of units of 𝒪K.

Question 51.1.1. Show that 𝒪K× is a group under multiplication. Hence we name it the unit group of 𝒪K.

What are some examples of units?

Example 51.1.2 (Examples of units in a number field)

1.
±1 are certainly units, present in any number field.
2.
If 𝒪K contains a root of unity ω (i.e. ωⁿ = 1), then ω is a unit. (In fact, ±1 are special cases of this.)
3.
Of course, not all units of 𝒪K are roots of unity. For example, if 𝒪K = ℤ[√3] (from K = ℚ(√3)) then the number 2 + √3 is a unit, as its norm is

\operatorname N_{K/\mathbb Q}(2 + \sqrt 3) = 2^2 - 3 \cdot 1^2 = 1.

Alternatively, just note that the inverse 2 − √3 ∈ 𝒪K as well:

\left( 2 - \sqrt 3 \right)\left( 2 + \sqrt 3 \right) = 1.

Either way, 2 + √3 is a unit.

4.
Given any unit u ∈ 𝒪K×, all its powers are also units. So for example, (3 − 2√2)ⁿ is always a unit of ℤ[√2], for any n. If u is not a root of unity, then this generates infinitely many new units in 𝒪K×.

Question 51.1.3. Verify the claims above that

(a)
Roots of unity are units, and
(b)
Powers of units are units.

One can either proceed from the definition or use the characterization NK∕ℚ(α) = ±1. If one definition seems more natural to you, use the other.

51.2  Dirichlet’s unit theorem

Prototypical example for this section: The units of ℤ[√3] are ±(2 + √3)ⁿ.

Definition 51.2.1. Let μ(𝒪K) denote the set of roots of unity contained in a number field K (equivalently, in 𝒪K).

Example 51.2.2 (Examples of μ(𝒪K))

(a)
If K = ℚ(i), then 𝒪K = ℤ[i]. So

\mu(\mathcal O_K) = \{ \pm 1, \pm i \} \quad \text{where } K = \mathbb Q(i).

(b)
If K = ℚ(√3), then 𝒪K = ℤ[√3]. So

\mu(\mathcal O_K) = \{ \pm 1 \} \quad \text{where } K = \mathbb Q(\sqrt 3).

(c)
If K = ℚ(√−3), then 𝒪K = ℤ[½(1 + √−3)]. So

\mu(\mathcal O_K) = \left\{ \pm 1, \ \frac{\pm 1 \pm \sqrt{-3}}{2} \right\} \quad \text{where } K = \mathbb Q(\sqrt{-3})

where the ±'s in the second term need not depend on each other; in other words μ(𝒪K) = {z | z⁶ = 1}.

Exercise 51.2.3. Show that we always have that μ(𝒪K) comprises the roots of xⁿ − 1 for some integer n. (First, show it is a finite group under multiplication.)

We now quote, without proof, the so-called Dirichlet’s unit theorem, which gives us a much more complete picture of what the units in 𝒪K are. Legend says that Dirichlet found the proof of this theorem during an Easter concert in the Sistine Chapel.

Theorem 51.2.4 (Dirichlet’s unit theorem)
Let K be a number field with signature (r1,r2) and set

s = r1 + r2 − 1.

Then there exist units u1, …, us such that every unit α ∈𝒪K× can be written uniquely in the form

\alpha = \omega \cdot u_1^{n_1} \dots u_s^{n_s}

where ω ∈ μ(𝒪K) is a root of unity, and n1, …, ns ∈ ℤ.

More succinctly:

We have 𝒪K× ≅ ℤ^{r1 + r2 − 1} × μ(𝒪K).

A choice of u1, …, us is called a choice of fundamental units.

Here are some example applications.

Example 51.2.5 (Some unit groups)

(a)
Let K = ℚ(i) with signature (0, 1). Then we obtain s = 0, so Dirichlet's unit theorem says that there are no units other than the roots of unity. Thus

\mathcal O_K^\times = \{ \pm 1, \pm i \} \quad \text{where } K = \mathbb Q(i).

This is not surprising, since a + bi ∈ ℤ[i] is a unit if and only if a² + b² = 1.

(b)
Let K = ℚ(√3), which has signature (2, 0). Then s = 1, so we expect exactly one fundamental unit. A fundamental unit is 2 + √3 (or 2 − √3, its inverse) with norm 1, and so we find

\mathcal O_K^\times = \left\{ \pm (2 + \sqrt 3)^n \mid n \in \mathbb Z \right\}.

(c)
Let K = ℚ(∛2), with signature (1, 1). Then s = 1, so we expect exactly one fundamental unit. A fundamental unit turns out to be 1 + ∛2 + ∛4. So

\mathcal O_K^\times = \left\{ \pm \left( 1 + \sqrt[3]{2} + \sqrt[3]{4} \right)^n \mid n \in \mathbb Z \right\}.

I haven’t actually shown you that these are fundamental units, and indeed computing fundamental units is in general hard.

51.3  Finding fundamental units

Here is a table with some fundamental units.

d        Unit
d = 2    1 + √2
d = 3    2 + √3
d = 5    ½(1 + √5)
d = 6    5 + 2√6
d = 7    8 + 3√7
d = 10   3 + √10
d = 11   10 + 3√11

In general, determining fundamental units is computationally hard.

However, once I tell you what the fundamental unit is, it's not too bad (at least in the case s = 1) to verify it. For example, suppose we want to show that 10 + 3√11 is a fundamental unit of K = ℚ(√11), which has ring of integers ℤ[√11]. If not, then for some n > 1, we would have to have

10 + 3\sqrt{11} = \pm \left( x + y\sqrt{11} \right)^n.

For this to happen, at the very least we would need |y| < 3. We would also have x² − 11y² = ±1. So one can just verify (using y = 1, 2) that this fails.

The point is that: Since (10, 3) is the smallest (in the sense of |y|) integer solution to x² − 11y² = ±1, it must be the fundamental unit. This holds more generally, although in the case that d ≡ 1 (mod 4) a modification must be made, as x, y might be half-integers (like ½(1 + √5)).

Theorem 51.3.1 (Fundamental units of Pell equations)
Assume d is a squarefree integer.

(a)
If d ≡ 2, 3 (mod 4), and (x, y) is a minimal integer solution to x² − dy² = ±1, then x + y√d is a fundamental unit.
(b)
If d ≡ 1 (mod 4), and (x, y) is a minimal half-integer solution to x² − dy² = ±1, then x + y√d is a fundamental unit. (Equivalently, the minimal integer solution to a² − db² = ±4 gives ½(a + b√d).)

(Any reasonable definition of “minimal” will work, such as sorting by |y|.)
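Here is a minimal sketch of that search for the d ≡ 2, 3 (mod 4) case (the function name is my own): scan y = 1, 2, … and stop at the first y for which dy² ± 1 is a perfect square. The d ≡ 1 (mod 4) case would instead search for dy² ± 4 being a perfect square, per part (b).

    from math import isqrt

    def fundamental_unit(d):
        # assumes d > 0 squarefree with d = 2, 3 (mod 4)
        y = 1
        while True:
            for s in (1, -1):            # try x^2 - d y^2 = +1, then -1
                t = d * y * y + s
                x = isqrt(t)
                if x * x == t:
                    return x, y, s       # u = x + y*sqrt(d), of norm s
            y += 1

    for d in (2, 3, 6, 7, 10, 11):
        print(d, fundamental_unit(d))
    # reproduces the table: (1,1,-1), (2,1,1), (5,2,1), (8,3,1), (3,1,-1), (10,3,1)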

51.4  Pell’s equation

This class of results completely eradicates Pell’s Equation. After all, solving

a2 − d ⋅b2 = ±1

amounts to finding elements of ℤ[√d] with norm ±1. It's a bit weirder in the d ≡ 1 (mod 4) case, since in that case K = ℚ(√d) gives 𝒪K = ℤ[½(1 + √d)], and so the fundamental unit may not actually be a solution. (For example, when d = 5, we get the “solution” (½, ½).) Nonetheless, all integer solutions are eventually generated.

To make this all concrete, here’s a simple example.

Example 51.4.1 (x² − 5y² = ±1)
Set K = ℚ(√5), so 𝒪K = ℤ[½(1 + √5)]. By Dirichlet's unit theorem, 𝒪K× is generated by a single element u. The choice

u = \frac 12 + \frac 12 \sqrt 5

serves as a fundamental unit, as there are no smaller integer solutions to a² − 5b² = ±4.

The first several powers of u are

n     uⁿ             Norm
−2    ½(3 − √5)      1
−1    ½(−1 + √5)     −1
0     1              1
1     ½(1 + √5)      −1
2     ½(3 + √5)      1
3     2 + √5         −1
4     ½(7 + 3√5)     1
5     ½(11 + 5√5)    −1
6     9 + 4√5        1

One can see that the first integer solution is (2, 1), which gives −1. The first solution with +1 is (9, 4). Continuing the pattern, we find that every third power of u gives an integer solution (see also ?? ), with the odd ones giving a solution to x² − 5y² = −1 and the even ones a solution to x² − 5y² = +1. All solutions are generated this way, up to ± signs (by considering ±u^{±n}).
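The table is mechanical to reproduce; here is a sketch in Python, storing ½(a + b√5) as the pair (a, b). (One can check a and b always have the same parity for elements of 𝒪K, so the integer divisions below are exact.)

    def mul(x, y):
        a, b = x; c, d = y
        return ((a * c + 5 * b * d) // 2, (a * d + b * c) // 2)

    val = (2, 0)                            # u^0 = 1, stored as (a, b) = (2, 0)
    for n in range(1, 13):
        val = mul(val, (1, 1))              # multiply by u = (1 + sqrt5)/2
        a, b = val
        norm = (a * a - 5 * b * b) // 4
        if a % 2 == 0 and b % 2 == 0:       # u^n lies in Z[sqrt5]: an integer solution
            print(n, (a // 2, b // 2), norm)
    # prints (2,1) at n=3, (9,4) at n=6, (38,17) at n=9, (161,72) at n=12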

51.5  A few harder problems to think about

Problem 51A (Fictitious account of the Battle of Hastings). Determine the number of soldiers in the following battle:

The men of Harold stood well together, as their wont was, and formed thirteen squares, with a like number of men in every square thereof, and woe to the hardy Norman who ventured to enter their redoubts; for a single blow of Saxon war-hatchet would break his lance and cut through his coat of mail … when Harold threw himself into the fray the Saxons were one mighty square of men, shouting the battle-cries, “Ut!”, “Olicrosse!”, “Godemite!”

Problem 51B. Let d > 0 be a squarefree integer, and let u denote the fundamental unit of ℚ(√d). Show that either u ∈ ℤ[√d], or uⁿ ∈ ℤ[√d] ⟺ 3 ∣ n.

Problem 51C. Show that there are no integer solutions to

x^2 - 34y^2 = -1

despite the fact that −1 is a quadratic residue mod 34.

Part XIV
Algebraic NT II: Galois and Ramification Theory

52  Things Galois

52.1  Motivation

Prototypical example for this section: ℚ(√2) and ℚ(∛2).

The key idea in Galois theory is that of embeddings, which give us another way to get at the idea of the “conjugate” we described earlier.

Let K be a number field. An embedding σ : K ↪ ℂ is an injective field homomorphism: it needs to preserve addition and multiplication, and in particular it should fix 1.

Question 52.1.1. Show that in this context, σ(q) = q for any rational number q.

Example 52.1.2 (Examples of embeddings)

(a)
If K = ℚ(i), the two embeddings of K into ℂ are z ↦ z (the identity) and z ↦ z̄ (complex conjugation).
(b)
If K = ℚ(√2), the two embeddings of K into ℂ are a + b√2 ↦ a + b√2 (the identity) and a + b√2 ↦ a − b√2 (conjugation).
(c)
If K = ℚ(∛2), there are three embeddings:
  • The identity embedding, which sends 1 ↦ 1 and ∛2 ↦ ∛2.
  • An embedding which sends 1 ↦ 1 and ∛2 ↦ ω∛2, where ω is a cube root of unity. Note that this is enough to determine the rest of the embedding.
  • An embedding which sends 1 ↦ 1 and ∛2 ↦ ω²∛2.

I want to make several observations about these embeddings, which will form the core ideas of Galois theory. Pay attention here!

In this chapter we'll develop these ideas in full generality, for fields other than ℚ.

52.2  Field extensions, algebraic closures, and splitting fields

Prototypical example for this section: ℚ(∛2) is an extension; ℂ is an algebraic closure of any number field.

First, we define a notion of one field sitting inside another, in order to generalize the notion of a number field.

Definition 52.2.1. Let K and F be fields. If F ⊆ K, we write K∕F and say K is a field extension of F.

Thus K is automatically an F-vector space (just like ℚ(√2) is automatically a ℚ-vector space). The degree is the dimension of this space, denoted [K : F]. If [K : F] is finite, we say K∕F is a finite (field) extension.

That’s really all. There’s nothing tricky at all.

Question 52.2.2. What do you call a finite extension of ℚ?

Degrees of finite extensions are multiplicative.

Theorem 52.2.3 (Field extensions have multiplicative degree)
Let F ⊆ K ⊆ L be fields with L∕K, K∕F finite. Then

[L : K ][K : F ] = [L : F ].

Proof. Basis bash: take a basis of L over K and a basis of K over F; pairwise products give a basis of L over F. (Diligent readers can fill in details.) □

Next, given a field (like ℚ(∛2)) we want something to embed it into (in our case ℂ). So we just want a field that contains all the roots of all the polynomials:

Theorem 52.2.4 (Algebraic closures)
Let F be a field. Then there exists a field extension F̄ containing F, called an algebraic closure, such that all polynomials in F̄[x] factor completely.

Example 52.2.5 (ℂ)
ℂ is an algebraic closure of ℝ, and even of ℚ itself.

Abuse of Notation 52.2.6. Some authors also require the algebraic closure to be minimal by inclusion: for example, given ℚ they would want only ℚ̄ (the algebraic numbers). It's a theorem that such a minimal algebraic closure is unique, and so these authors will refer to the algebraic closure of K.

I like ℂ, so I'll use the looser definition.

52.3  Embeddings into algebraic closures for number fields

Now that I’ve defined all these ingredients, I can prove:

Theorem 52.3.1 (The n embeddings of a number field)
Let K be a number field of degree n. Then there are exactly n field homomorphisms K ↪ ℂ, say σ1, …, σn, which fix ℚ.

Remark 52.3.2 — Note that a nontrivial homomorphism of fields is necessarily injective (the kernel is an ideal). This justifies the use of “↪”, and we call each σi an embedding of K into ℂ.

Proof. This is actually kind of fun! Recall that any irreducible polynomial over ℚ has distinct roots (?? ). We'll adjoin elements α1, α2, …, αm one at a time to ℚ, until we eventually get all of K, that is,

K = ℚ(α1, …, αm).

Diagrammatically, this is

[diagram: the tower ℚ ⊆ ℚ(α1) ⊆ ℚ(α1, α2) ⊆ ⋯ ⊆ K, with maps τ1, τ2, … into ℂ extending one another]

First, we claim there are exactly

[ℚ (α1) : ℚ ]

ways to pick τ1. Observe that τ1 is determined by where it sends α1 (since it has to fix ). Letting p1 be the minimal polynomial of α1, we see that there are deg p1 choices for τ1, one for each (distinct) root of p1. That proves the claim.

Similarly, given a choice of τ1, there are

[ℚ(α1,α2 ) : ℚ (α1)]

ways to pick τ2. (It's a little different: τ1 need not be the identity. But it's still true that τ2 is determined by where it sends α2, and as before there are [ℚ(α1, α2) : ℚ(α1)] possible ways.)

Multiplying these all together gives the desired [K : ℚ]. □

Remark 52.3.3 — The primitive element theorem actually implies that m = 1 is sufficient; we don’t need to build a whole tower. This simplifies the proof somewhat.

It's common to see expressions like “let K be a number field of degree n, and σ1, …, σn its n embeddings” without further explanation. The relation between these embeddings and the Galois conjugates is given as follows.

Theorem 52.3.4 (Embeddings are evenly distributed over conjugates)
Let K be a number field of degree n with n embeddings σ1, …, σn, and let α ∈ K have m Galois conjugates over ℚ.

Then σj(α) is “evenly distributed” over each of these m conjugates: for any Galois conjugate β, exactly n∕m of the embeddings send α to β.

Proof. In the previous proof, adjoin α1 = α first. □

So, now we can define the trace and norm over ℚ in a nice way: given a number field K, we set

\operatorname{Tr}_{K/\mathbb Q}(\alpha) = \sum_{i=1}^{n} \sigma_i(\alpha) \quad\text{and}\quad \operatorname N_{K/\mathbb Q}(\alpha) = \prod_{i=1}^{n} \sigma_i(\alpha)

where σi are the n embeddings of K into ℂ.
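For instance, for K = ℚ(√2) and α = 3 + √2, the two embeddings send √2 ↦ ±√2, and a quick numeric sanity check (floating point, so up to rounding) gives Tr = 6 and N = 7:

    from math import sqrt, prod

    conjugates = [3 + sqrt(2), 3 - sqrt(2)]     # sigma_1(alpha), sigma_2(alpha)
    print(sum(conjugates), prod(conjugates))    # approximately 6 and 7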

52.4  Everyone hates characteristic 2: separable vs irreducible

Prototypical example for this section: ℚ has characteristic zero, hence irreducible polynomials are separable.

Now, we want a version of the above theorem for any field F. If you read the proof, you’ll see that the only thing that ever uses anything about the field is ?? , where we use the fact that

Irreducible polynomials over F have no double roots.

Let's call a polynomial with no double roots separable; thus we want irreducible polynomials to be separable. We did this for ℚ in the last chapter by taking derivatives. Should work for any field, right?

Nope. Suppose we took the derivative of some polynomial like 2x³ + 24x + 9, namely 6x² + 24. Over ℚ it's obvious that the derivative of a nonconstant polynomial f isn't zero. But suppose we considered the above as a polynomial in 𝔽3, i.e. modulo 3. Then the derivative is zero. Oh, no!

We have to impose a condition that prevents something like this from happening.

Definition 52.4.1. For a field F, the characteristic of F is the smallest positive integer p such that

\underbrace{1_F + \dots + 1_F}_{p \text{ times}} = 0

or zero if no such integer p exists.

Example 52.4.2 (Field characteristics)
Old friends ℚ, ℝ, ℂ all have characteristic zero. But 𝔽p, the integers modulo p, is a field of characteristic p.

Exercise 52.4.3. Let F be a field of characteristic p. Show that if p > 0 then p is a prime number. (A proof is given next chapter.)

With the assumption of characteristic zero, our earlier proof works.

Lemma 52.4.4 (Separability in characteristic zero)
Any irreducible polynomial in a characteristic zero field is separable.

Unfortunately, this lemma is false if the “characteristic zero” condition is dropped.

Remark 52.4.5 — The reason it’s called separable is (I think) this picture: I have a polynomial and I want to break it into irreducible parts. Normally, if I have a double root in a polynomial, that means it’s not irreducible. But in characteristic p > 0 this fails. So inseparable polynomials are strange when you think about them: somehow you have double roots that can’t be separated from each other.

We can get this to work for any field extension in which separability is not an issue.

Definition 52.4.6. A separable extension K∕F is one in which every irreducible polynomial in F[x] is separable (for example, if F has characteristic zero). A field F is perfect if any finite field extension K∕F is separable.

In fact, as we see in the next chapter:

Theorem 52.4.7 (Finite fields are perfect)
Suppose F is a field with finitely many elements. Then it is perfect.

Thus, we will almost never have to worry about separability since every field we see in the Napkin is either finite or characteristic 0. So the inclusion of the word “separable” is mostly a formality.

Proceeding onwards, we obtain

Theorem 52.4.8 (The n embeddings of any separable extension)
Let K∕F be a separable extension of degree n and let F̄ be an algebraic closure of F. Then there are exactly n field homomorphisms K ↪ F̄, say σ1, …, σn, which fix F.

In any case, this lets us define the trace and norm for any separable extension.

Definition 52.4.9. Let K∕F be a separable extension of degree n, and let σ1, …, σn be the n embeddings into an algebraic closure of F. Then we define

\operatorname{Tr}_{K/F}(\alpha) = \sum_{i=1}^{n} \sigma_i(\alpha) \quad\text{and}\quad \operatorname N_{K/F}(\alpha) = \prod_{i=1}^{n} \sigma_i(\alpha).

When F = ℚ and the algebraic closure is ℂ, this coincides with our earlier definition!

52.5  Automorphism groups and Galois extensions

Prototypical example for this section: ℚ(√2) is Galois but ℚ(∛2) is not.

We now want to get back at the idea we stated at the beginning of this section that ℚ(∛2) is deficient in a way that ℚ(√2) is not.

First, we define the “internal” automorphisms.

Definition 52.5.1. Suppose K∕F is a finite extension. Then Aut(K∕F) is the set of field isomorphisms σ : K K which fix F. In symbols

Aut(K ∕F ) = {σ : K → K | σ is identity on F} .

This is a group under function composition!

Note that this time, we have a condition that F is fixed by σ. (This was not there before when we considered F = ℚ, because we got it for free.)

Example 52.5.2 (Old examples of automorphism groups)
Reprising the example at the beginning of the chapter in the new notation, we have:

(a)
Aut(ℚ(i)∕ℚ) ≅ ℤ∕2ℤ, with elements z ↦ z and z ↦ z̄.
(b)
Aut(ℚ(√2)∕ℚ) ≅ ℤ∕2ℤ in the same way.
(c)
Aut(ℚ(∛2)∕ℚ) is the trivial group, with only the identity embedding!

Example 52.5.3 (Automorphism group of ℚ(√2, √3))
Here's a new example: let K = ℚ(√2, √3). It turns out that Aut(K∕ℚ) = {1, σ, τ, στ}, where

\sigma : \begin{cases} \sqrt 2 \mapsto -\sqrt 2 \\ \sqrt 3 \mapsto \sqrt 3 \end{cases} \quad\text{and}\quad \tau : \begin{cases} \sqrt 2 \mapsto \sqrt 2 \\ \sqrt 3 \mapsto -\sqrt 3. \end{cases}

In other words, Aut(K∕ℚ) is the Klein four group.

First, let’s repeat the proof of the observation that these embeddings shuffle around roots (akin to the first observation in the introduction):

Lemma 52.5.4 (Root shuffling in Aut(K∕F))
Let f ∈ F[x], suppose K∕F is a finite extension, and assume α ∈ K is a root of f. Then for any σ ∈ Aut(K∕F), σ(α) is also a root of f.

Proof. Let f(x) = cnxⁿ + c_{n−1}x^{n−1} + ⋯ + c0, where ci ∈ F. Thus,

0 = \sigma(f(\alpha)) = \sigma\left( c_n \alpha^n + \dots + c_0 \right) = c_n \sigma(\alpha)^n + \dots + c_0 = f(\sigma(\alpha)). \qquad \square

In particular, taking f to be the minimal polynomial of α we deduce

An embedding σ ∈ Aut(K∕F) sends an α ∈ K to one of its various Galois conjugates (over F).

Next, let's look again at the “deficiency” of certain fields. Look at K = ℚ(∛2). So, again K∕ℚ is deficient for two reasons. First, while there are three maps ℚ(∛2) ↪ ℂ, only one of them lives in Aut(K∕ℚ), namely the identity. In other words, |Aut(K∕ℚ)| is too small. Secondly, K is missing some Galois conjugates (ω∛2 and ω²∛2).

The way to capture the fact that there are missing Galois conjugates is the notion of a splitting field.

Definition 52.5.5. Let F be a field and p(x) ∈ F[x] a polynomial of degree n. Then p(x) has roots α1, …, αn in an algebraic closure of F. The splitting field of p(x) over F is defined as F(α1, …, αn).

In other words, the splitting field is the smallest field in which p(x) splits.

Example 52.5.6 (Examples of splitting fields)

(a)
The splitting field of x² − 5 over ℚ is ℚ(√5). This is a degree 2 extension.
(b)
The splitting field of x² + x + 1 over ℚ is ℚ(ω), where ω is a cube root of unity. This is a degree 2 extension.
(c)
The splitting field of x² + 3x + 2 = (x + 1)(x + 2) is just ℚ! There's nothing to do.

Example 52.5.7 (Splitting fields: a cautionary tale)
The splitting field of x³ − 2 over ℚ is in fact

ℚ(∛2, ω)

and not just ℚ(∛2)! One must really adjoin all the roots, and it's not necessarily the case that these roots will generate each other.

To be clear:

Note that in particular, the splitting field of x³ − 2 over ℚ is degree six, not just degree three.

In general, the splitting field of a polynomial can be an extension of degree up to n!. The reason is that if p(x) has n roots and none of them are “related” to each other, then any permutation of the roots will work.

Now, we obtain:

Theorem 52.5.8 (Galois extensions are splitting)
For finite extensions K∕F, |Aut(K ∕F )| divides [K : F], with equality if and only if K is the splitting field of some separable polynomial with coefficients in F.

The proof of this is deferred to an optional section at the end of the chapter. If K∕F is a finite extension and |Aut(K ∕F )| = [K : F], we say the extension K∕F is Galois. In that case, we denote Aut(K∕F) by Gal(K∕F) instead and call this the Galois group of K∕F.

Example 52.5.9 (Examples and non-examples of Galois extensions)

(a)
The extension ℚ(√2)∕ℚ is Galois, since it's the splitting field of x² − 2 over ℚ. The Galois group has order two, given by √2 ↦ ±√2.
(b)
The extension ℚ(√2, √3)∕ℚ is Galois, since it's the splitting field of (x² − 5)² − 24 over ℚ. As discussed before, the Galois group is ℤ∕2ℤ × ℤ∕2ℤ.
(c)
The extension ℚ(∛2)∕ℚ is not Galois.

To explore ℚ(∛2) one last time:

Example 52.5.10 (Galois closures, and the automorphism group of ℚ(∛2, ω))
Let's return to the field K = ℚ(∛2, ω), which is a field with [K : ℚ] = 6. Consider the two automorphisms:

\sigma : \begin{cases} \sqrt[3]{2} \mapsto \omega\sqrt[3]{2} \\ \omega \mapsto \omega \end{cases} \quad\text{and}\quad \tau : \begin{cases} \sqrt[3]{2} \mapsto \sqrt[3]{2} \\ \omega \mapsto \omega^2. \end{cases}

Notice that σ³ = τ² = id. From this one can see that the automorphism group of K must have order 6 (it certainly has order at most 6; now use Lagrange's theorem). So, K∕ℚ is Galois! Actually one can check explicitly that

\operatorname{Gal}(K/\mathbb Q) \cong S_3

is the symmetric group on 3 elements, with order 3! = 6.

This example illustrates the fact that given a non-Galois field extension, one can “add in” missing conjugates to make it Galois. This is called taking a Galois closure.

52.6  Fundamental theorem of Galois theory

After all this stuff about Galois Theory, I might as well tell you the fundamental theorem, though I won’t prove it. Basically, it says that if K∕F is Galois with Galois group G, then:

Subgroups of G correspond exactly to fields E with F E K.

To tell you how the bijection goes, I have to define a fixed field.

Definition 52.6.1. Let K be a field and H a subgroup of Aut(K∕F). We define the fixed field of H, denoted KH, as

KH  := {x ∈ K : σ (x) = x ∀σ ∈ H }.

Question 52.6.2. Verify quickly that KH is actually a field.

Now let's look at examples again. Consider K = ℚ(√2, √3), where

G = \operatorname{Gal}(K/\mathbb Q) = \{ \operatorname{id}, \sigma, \tau, \sigma\tau \}

is the Klein four group (where σ(√2) = −√2 but σ(√3) = √3; τ goes the other way).

Question 52.6.3. Let H = {id}. What is KH?

In that case, the diagram of fields between ℚ and K matches exactly with the subgroups of G, as follows:

[diagrams: the lattice of fields ℚ ⊆ ℚ(√2), ℚ(√6), ℚ(√3) ⊆ K, mirrored by the lattice of subgroups G ⊇ {id, τ}, {id, στ}, {id, σ} ⊇ {id}]

We see that subgroups correspond to fixed fields. That, and much more, holds in general.

Theorem 52.6.4 (Fundamental theorem of Galois theory)
Let K∕F be a Galois extension with Galois group G = Gal(K∕F).

(a)
There is a bijection between field towers F E K and subgroups H G:
\left\{ \begin{array}{c} K \\ \mid \\ E \\ \mid \\ F \end{array} \right\} \iff \left\{ \begin{array}{c} 1 \\ \mid \\ H \\ \mid \\ G \end{array} \right\}

The bijection sends H to its fixed field KH, and hence is inclusion reversing.

(b)
Under this bijection, we have [K : E] = |H| and [E : F] = [G : H].
(c)
K∕E is always Galois, and its Galois group is Gal(K∕E) = H.
(d)
E∕F is Galois if and only if H is normal in G. If so, Gal(E∕F) = G∕H.

Exercise 52.6.5. Suppose we apply this theorem for

K = ℚ(∛2, ω).

Verify that the fact E = ℚ(∛2) is not Galois corresponds to the fact that S3 does not have normal subgroups of order 2.

52.7  A few harder problems to think about

Problem 52A (Galois group of the cyclotomic field). Let p be an odd rational prime and ζp a primitive pth root of unity. Let K = ℚ(ζp). Show that

\operatorname{Gal}(K/\mathbb Q) \cong (\mathbb Z/p\mathbb Z)^\times.

Problem 52B (Greek constructions). Prove that the three Greek constructions

(a)
doubling the cube,
(b)
squaring the circle, and
(c)
trisecting an angle

are all impossible. (Assume π is transcendental.)

Problem 52C (China Hong Kong Math Olympiad). Prove that there are no rational numbers p, q, r satisfying

\cos\left( \frac{2\pi}{7} \right) = p + \sqrt q + \sqrt[3]{r}.

Problem 52D. Show that the only automorphism of ℝ is the identity. Hence Aut(ℝ) is the trivial group.

Problem 52E (Artin's primitive element theorem). Let K be a number field. Show that K ≅ ℚ(γ) for some γ.

52.8  (Optional) Proof that Galois extensions are splitting

We prove ?? . First, we extract a useful fragment from the fundamental theorem.

Theorem 52.8.1 (Fixed field theorem)
Let K be a field and G a subgroup of Aut(K). Then [K : KG] = |G |.

The inequality itself is not difficult:

Exercise 52.8.2. Show that [K : F] ≥|Aut(K∕F)|, and that equality holds if and only if the set of elements fixed by all σ Aut(K∕F) is exactly F. (Use ?? .)

The equality case is trickier.

The easier direction is when K is a splitting field. Assume K = F(α1, …, αn) is the splitting field of some separable polynomial p ∈ F[x] with n distinct roots α1, …, αn. Adjoin them one by one:

[diagram: the tower F ⊆ F(α1) ⊆ ⋯ ⊆ K, with maps τ1, τ2, … into F̄]

(Does this diagram look familiar?) Every map K → K which fixes F corresponds to an above commutative diagram. As before, there are exactly [F(α1) : F] ways to pick τ1. (You need the fact that the minimal polynomial p1 of α1 is separable for this: there need to be exactly deg p1 = [F(α1) : F] distinct roots for α1 to get sent to.) Similarly, given a choice of τ1, there are [F(α1, α2) : F(α1)] ways to pick τ2. Multiplying these all together gives the desired [K : F].

Now assume K∕F is Galois. First, we state:

Lemma 52.8.3
Let K∕F be Galois, and p ∈ F[x] irreducible. If any root of p (in F̄) lies in K, then all of them do, and in fact p is separable.

Proof. Let α ∈ K be the prescribed root. Consider the set

S = \{ \sigma(\alpha) \mid \sigma \in \operatorname{Gal}(K/F) \}.

(Note that α ∈ S since id ∈ Gal(K∕F).) By construction, any τ ∈ Gal(K∕F) fixes S. So if we construct

\widetilde p(x) = \prod_{\beta \in S} (x - \beta),

then by Vieta's formulas, we find that all the coefficients of p̃ are fixed by every σ ∈ Gal(K∕F). By the equality case we specified in the exercise, it follows that p̃ has coefficients in F! (This is where we use the condition.) Also, by ?? , p̃ divides p.

Yet p was irreducible, so it is the minimal polynomial of α in F[x], and therefore we must have that p divides p̃. Hence p = p̃. Since p̃ was built to be separable, so is p. □

Now we’re basically done – pick a basis ω1, …, ωn of K∕F, and let pi be their minimal polynomials; by the above, we don’t get any roots outside K. Consider P = p1pn, removing any repeated factors. The roots of P are ω1, …, ωn and some other guys in K. So K is the splitting field of P.

53  Finite fields

In this short chapter, we classify all fields with finitely many elements and compute the Galois groups. Nothing in here is very hard, and so most of the proofs are just sketches; if you like, you should check the details yourself.

The whole point of this chapter is to prove:

(a) Every finite field F has order p^n for some prime p and integer n ≥ 1.
(b) For each prime power p^n, there is exactly one finite field with p^n elements up to isomorphism, denoted 𝔽_{p^n}; it is the splitting field of x^{p^n} − x over 𝔽p.
(c) The extension 𝔽_{p^n}∕𝔽p is Galois, and its Galois group is cyclic of order n, generated by the Frobenius map x ↦ x^p.

If you're in a hurry you can just remember these results and skip to the next chapter.

53.1  Example of a finite field

Before diving in, we give some examples.

Recall that the characteristic of a field F is the smallest positive integer p such that

1F + ⋯ + 1F = 0   (p summands)

or 0 if no such integer p exists.

Example 53.1.1 (Base field)
Let 𝔽p denote the field of integers modulo p. This is a field with p elements, with characteristic p.

Example 53.1.2 (The finite field of nine elements)
Let

F ≅ 𝔽3[X]∕(X² + 1) ≅ ℤ[i]∕(3).

We can think of its elements as

{a + bi | 0 ≤ a, b ≤ 2}.

Since (3) is prime in ℤ[i], the ring of integers of ℚ(i), we see F is a field with 3² = 9 elements. Note that, although this field has 9 elements, every element x has the property that

3x = x + x + x = 0.

In particular, F has characteristic 3.

53.2  Finite fields have prime power order

Lemma 53.2.1
If the characteristic of a field F isn’t zero, it must be a prime number.

Proof. Assume not, so the characteristic is composite, say n = ab with 1 < a, b < n. Then let

A = 1F + ⋯ + 1F ≠ 0   (a summands)

and

B = 1F + ⋯ + 1F ≠ 0   (b summands).

Then AB = 0, contradicting the fact that F is a field. □

We like fields of characteristic zero, but unfortunately for finite fields we are doomed to have nonzero characteristic.

Lemma 53.2.2 (Finite fields have prime power orders)
Let F be a finite field. Then

(a)
Its characteristic is nonzero, and hence some prime p.
(b)
The field F is a finite extension of 𝔽p, and in particular it is an 𝔽p-vector space.
(c)
We have |F| = p^n for some prime p and integer n ≥ 1.

Proof. Very briefly, since this is easy:

(a)
Apply Lagrange’s theorem (or pigeonhole principle!) to (F,+) to get the characteristic isn’t zero.
(b)
The additive subgroup of (F,+) generated by 1F is an isomorphic copy of 𝔽p.
(c)
Since it’s a field extension, F is a finite-dimensional vector space over 𝔽p, with some basis e1,,en. It follows that there are pn elements of F. □

Remark 53.2.3 — An amusing alternate proof of (c) by contradiction: if a prime q ≠ p divides |F|, then by Cauchy's theorem (?? ) on (F,+) there's a (nonzero) element x of order q. Evidently

x · (1F + ⋯ + 1F) = 0   (q summands)

then, but x ≠ 0, and hence the characteristic of F must also divide q, which is impossible.

An important point in the above proof is that

Lemma 53.2.4 (Finite fields are field extensions of 𝔽p)
If F is a finite field with |F| = p^n, then there is an isomorphic copy of 𝔽p sitting inside F. Thus F is a field extension of 𝔽p.

We want to refer a lot to this copy of 𝔽p, so in what follows:

Abuse of Notation 53.2.5. Every integer n can be identified as an element of F, namely

n := 1F + ⋯ + 1F   (n summands).

Note that (as expected) this depends only on n (mod p).

This notation makes it easier to think about statements like the following.

Theorem 53.2.6 (Freshman's dream)
For any a, b ∈ F we have

(a + b)^p = a^p + b^p.

Proof. Use the binomial theorem, and the fact that the binomial coefficient (p choose i) is divisible by p for 0 < i < p. □

Exercise 53.2.7. Convince yourself that this proof works.
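If you like, you can also let a computer check the key divisibility. Here is a quick sketch, assuming Python with sympy is available (primerange and binomial below are sympy helpers):

```python
# Check that C(p, i) is divisible by p for 0 < i < p -- this is
# exactly what makes the freshman's dream work in characteristic p.
from sympy import binomial, primerange

for p in primerange(2, 30):
    assert all(binomial(p, i) % p == 0 for i in range(1, p))
print("C(p, i) is divisible by p for 0 < i < p, for all primes p < 30")
```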

53.3  All finite fields are isomorphic

We next proceed to prove “Fermat’s little theorem”:

Theorem 53.3.1 (Fermat's little theorem in finite fields)
Let F be a finite field of order p^n. Then every element x ∈ F satisfies

x^{p^n} − x = 0.

Proof. If x = 0 it's true; otherwise, use Lagrange's theorem on the abelian group (F^×, ×) to get x^{p^n − 1} = 1F. □

We can now prove the following result, which is the “main surprise” about finite fields: that there is a unique one up to isomorphism for each size.

Theorem 53.3.2 (Complete classification of finite fields)
A field F is a finite field with p^n elements if and only if it is a splitting field of x^{p^n} − x over 𝔽p.

Proof. By “Fermat’s little theorem”, all the elements of F satisfy this polynomial. So we just have to show that the roots of this polynomial are distinct (i.e. that it is separable).

To do this, we use the derivative trick again: the derivative of this polynomial is

p^n · x^{p^n − 1} − 1 = −1

which has no roots at all, so the polynomial cannot have any double roots. □

Definition 53.3.3. For this reason, it's customary to denote the field with p^n elements by 𝔽_{p^n}.

Note that the polynomial x^{p^n} − x (mod p) is far from irreducible, but the computation above shows that it's separable.

Example 53.3.4 (The finite field of order nine again)
The polynomial x⁹ − x is separable modulo 3 and has factorization

x(x + 1)(x + 2)(x² + 1)(x² + x + 2)(x² + 2x + 2)   (mod 3).

So if F has order 9, then we intuitively expect it to be the field generated by adjoining all the roots: 0, 1, 2, as well as ±i, 1 ±i, 2 ±i. Indeed, that’s the example we had at the beginning of this chapter.

(Here i denotes an element of 𝔽9 satisfying i² = −1. The notation is deliberately similar to the usual imaginary unit.)
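For the skeptical, this factorization is easy to verify by computer. A minimal sketch, assuming sympy (the symmetric=False flag just makes the factors print with coefficients in {0, 1, 2}):

```python
# Factor x^9 - x over F_3 and confirm the six factors listed above.
from sympy import symbols, Poly

x = symbols('x')
_, factors = Poly(x**9 - x, x, modulus=3, symmetric=False).factor_list()
for g, e in factors:
    print(g.as_expr(), e)
# Expected: x, x + 1, x + 2, x**2 + 1, x**2 + x + 2, x**2 + 2*x + 2,
# each with exponent 1 (i.e. the polynomial is separable).
```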

53.4  The Galois theory of finite fields

Retain the notation 𝔽_{p^n} now (instead of F like before). By the above theorem, it's the splitting field of a separable polynomial, hence we know that 𝔽_{p^n}∕𝔽p is a Galois extension. We would like to find the Galois group.

In fact, we are very lucky: it is cyclic. First, we exhibit one such element σp ∈ Gal(𝔽_{p^n}∕𝔽p):

Theorem 53.4.1 (The pth power automorphism)
The map σp : 𝔽_{p^n} → 𝔽_{p^n} defined by

σp(x) = x^p

is an automorphism, and moreover fixes 𝔽p.

Proof. It’s a homomorphism since it fixes 1, respects multiplication, and respects addition.

Question 53.4.2. Why does it respect addition?

Next, we claim that it is injective. To see this, note that

x^p = y^p ⟺ x^p − y^p = 0 ⟺ (x − y)^p = 0 ⟺ x = y.

Here we have again used the Freshman's Dream. Since 𝔽_{p^n} is finite, this injective map is automatically bijective. The fact that it fixes 𝔽p is Fermat's little theorem. □

Now we’re done:

Theorem 53.4.3 (Galois group of the extension 𝔽_{p^n}∕𝔽p)
We have Gal(𝔽_{p^n}∕𝔽p) ≅ ℤ∕nℤ, with generator σp.

Proof. Since [𝔽_{p^n} : 𝔽p] = n, the Galois group G has order n. So we just need to show σp ∈ G has order n.

Note that σp applied k times gives x ↦ x^{p^k}. Hence, σp applied n times is the identity, as all elements of 𝔽_{p^n} satisfy x^{p^n} = x. But if k < n, then σp applied k times cannot be the identity, or x^{p^k} − x would have too many roots. □

We can see an example of this again with the finite field of order 9.

Example 53.4.4 (Galois group of finite field of order 9)
Let 𝔽9 be the finite field of order 9, and represent it concretely by 𝔽9 = ℤ[i]∕(3). Let σ3 : 𝔽9 → 𝔽9 be x ↦ x³. We can witness the fate of all nine elements:

[Diagram: the action of σ3 on the nine elements of 𝔽9.]

(As claimed, 0, 1, 2 are the fixed points, so I haven't drawn arrows for them.) As predicted, the Galois group has order two:

Gal(𝔽9∕𝔽3) = {id, σ3} ≅ ℤ∕2ℤ.
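One can also watch σ3 act by brute force. Here is a small sketch in plain Python, modeling 𝔽9 = ℤ[i]∕(3) as pairs (a, b) ↔ a + bi with a, b taken mod 3 (the helper names mul and cube are ours, purely for illustration):

```python
# Apply the Frobenius x -> x^3 to all nine elements of F_9 = Z[i]/(3).
def mul(u, v):
    (a, b), (c, d) = u, v
    return ((a * c - b * d) % 3, (a * d + b * c) % 3)  # (a+bi)(c+di), i^2 = -1

def cube(u):
    return mul(u, mul(u, u))

for a in range(3):
    for b in range(3):
        print((a, b), "->", cube((a, b)))
# Output: (0,0), (1,0), (2,0) are fixed (these are 0, 1, 2), while the
# remaining six elements swap in conjugate pairs -- so sigma_3 has order 2.
```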

This concludes the proof of all results stated at the beginning of this chapter.

53.5  A few harder problems to think about

Problem 53A (HMMT 2017). What is the period of the Fibonacci sequence modulo 127?

54  Ramification theory

We’re very interested in how rational primes p factor in a bigger number field K. Some examples of this behavior: in [i] (which is a UFD!), we have factorizations

(2) = (1 + i)2
(3) = (3)
(5) = (2 + i)(2 i).

In this chapter we’ll learn more about how primes break down when they’re thrown into bigger number fields. Using weapons from Galois Theory, this will culminate in a proof of Quadratic Reciprocity.

54.1  Ramified / inert / split primes

Prototypical example for this section: In ℤ[i], 2 is ramified, 3 is inert, and 5 splits.

Let p be a rational prime, and toss it into 𝒪K. Thus we get a factorization into prime ideals

p · 𝒪K = 𝔭1^{e1} ⋯ 𝔭g^{eg}.

We say that each 𝔭i is above (p). Pictorially, you might draw this as follows:

[Diagram: the primes 𝔭1, …, 𝔭g of 𝒪K lying above the rational prime (p) of ℤ.]

Some names for various behavior that can happen:

(a) We say p is ramified if some ei > 1.
(b) We say p is inert if g = 1 and e1 = 1, i.e. (p) remains prime.
(c) We say p splits if g > 1.

Question 54.1.1. More generally, for a prime p in ℤ[i]:

(a) p is ramified exactly when p = 2,
(b) p is inert exactly when p ≡ 3 (mod 4), and
(c) p splits exactly when p ≡ 1 (mod 4).

Prove this.
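If you want to experiment before proving it, the criterion is easy to test by machine: by the Factoring Algorithm, p splits in ℤ[i] exactly when x² + 1 has a root mod p. A sketch assuming sympy:

```python
# Verify: x^2 + 1 has a root mod an odd prime p iff p = 1 (mod 4).
from sympy import primerange
from sympy.ntheory.residue_ntheory import sqrt_mod

for p in primerange(3, 60):
    has_root = sqrt_mod(-1, p) is not None
    assert has_root == (p % 4 == 1)
    print(p, "splits" if has_root else "inert")
```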

54.2  Primes ramify if and only if they divide ΔK

The most unusual case is ramification: Just like we don’t expect a randomly selected polynomial to have a double root, we don’t expect a randomly selected prime to be ramified. In fact, the key to understanding ramification is the discriminant.

For the sake of discussion, let's suppose that K is monogenic, 𝒪K = ℤ[𝜃], where 𝜃 has minimal polynomial f. Let p be a rational prime we'd like to factor. If f (mod p) factors as f1^{e1} ⋯ fg^{eg}, then we know that the prime factorization of (p) is given by

p · 𝒪K = ∏_i (p, fi(𝜃))^{ei}.

In particular, p ramifies exactly when f has a double root mod p! To detect whether this happens, we look at the polynomial discriminant of f, namely

Δ(f) = ∏_{i<j} (zi − zj)²

and see whether it is zero mod p – thus p ramifies if and only if this is true.

It turns out that the naïve generalization to any number field works if we replace Δ(f) by the discriminant ΔK of K (these are the same for monogenic 𝒪K by ?? ). That is,

Theorem 54.2.1 (Discriminant detects ramification)
Let p be a rational prime and K a number field. Then p is ramified if and only if p divides ΔK.

Example 54.2.2 (Ramification in the Gaussian integers)
Let K = ℚ(i), so 𝒪K = ℤ[i] and ΔK = −4. As predicted, the only prime ramifying in ℤ[i] is 2, the only prime factor of ΔK.

In particular, only finitely many primes ramify.
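In the monogenic case this is easy to play with on a computer. A small sketch, assuming sympy, for K = ℚ(i) with f = x² + 1:

```python
# p ramifies in Q(i) iff x^2 + 1 has a repeated factor mod p,
# iff p divides disc(x^2 + 1) = -4, i.e. only for p = 2.
from sympy import symbols, Poly, primerange

x = symbols('x')
print(Poly(x**2 + 1, x).discriminant())        # -4
for p in primerange(2, 20):
    _, factors = Poly(x**2 + 1, x, modulus=p).factor_list()
    ramified = any(e > 1 for _, e in factors)
    assert ramified == (4 % p == 0)            # only p = 2 divides -4
```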

54.3  Inertial degrees

Prototypical example for this section: (7) has inertial degree 2 in ℤ[i] and (2 + i) has inertial degree 1 in ℤ[i].

Recall that we were able to define an ideal norm N(𝔞) = |𝒪K∕𝔞| measuring how "roomy" the ideal 𝔞 is. For example, (5) has ideal norm 5² = 25 in ℤ[i], since

ℤ[i]∕(5) ≅ {a + bi | a, b ∈ ℤ∕5ℤ}

has 5² = 25 elements.

Now, let’s look at

                e
p ⋅𝒪K =  𝔭e11 ...𝔭 gg

in 𝒪K, where K has degree n. Taking the ideal norms of both sides, we have that

pn = N (𝔭1)e1 ...N (𝔭g)eg.

We conclude that 𝔭i = pfi for some integer fi 1, and moreover that

    ∑g
n =    eifi.
    i=1

Definition 54.3.1. We say fi is the inertial degree of 𝔭i, and ei is the ramification index.

Example 54.3.2 (Examples of inertial degrees)
Work in ℤ[i], which is degree 2. The inertial degree detects how "spacy" the given 𝔭 is when interpreted in 𝒪K.

(a) The prime (7) in ℤ[i] has inertial degree 2. Indeed, ℤ[i]∕(7) has 7² = 49 elements, those of the form a + bi for a, b modulo 7. It gives "two degrees" of space.
(b) Let (5) = (2 + i)(2 − i). The inertial degrees of (2 + i) and (2 − i) are both 1. Indeed, ℤ[i]∕(2 + i) only gives "one degree" of space, since each of its elements can be viewed as an integer modulo 5, and there are only 5¹ = 5 elements.

If you understand this, it should be intuitively clear why the sum of the ei fi should equal n.

54.4  The magic of Galois extensions

OK, that’s all fine and well. But something really magical happens when we add the additional hypothesis that K∕is Galois: all the inertial degrees and ramification degrees are equal. We set about proving this.

Let K∕be Galois with G = Gal(K∕). Note that if 𝔭 ⊆𝒪K is a prime above p, then the image σimg(𝔭) is also prime for any σ G (since σ is an automorphism!). Moreover, since p 𝔭 and σ fixes , we know that p σimg(𝔭) as well.

Thus, by the pointwise mapping, the Galois group acts on the prime ideals above a rational prime p. Picture:

The notation σimg(𝔭) is hideous in this context, since we’re really thinking of σ as just doing a group action, and so we give the shorthand:

Abuse of Notation 54.4.1. Let σ𝔭 be shorthand for σimg(𝔭).

Since the σ’s are all bijections (they are automorphisms!), it should come as no surprise that the prime ideals which are in the same orbit are closely related. But miraculously, it turns out there is only one orbit!

Theorem 54.4.2 (Galois group acts transitively)
Let K∕ℚ be Galois with G = Gal(K∕ℚ). Let {𝔭i} be the set of distinct prime ideals in the factorization of p · 𝒪K (in 𝒪K).

Then G acts transitively on the 𝔭i: for every i and j, we can find σ such that σ𝔭i = 𝔭j.

Proof. Fairly slick. Suppose for contradiction that no σ ∈ G sends 𝔭1 to 𝔭2, say. By the Chinese remainder theorem, we can find an x ∈ 𝒪K such that

x ≡ 0 (mod 𝔭1)
x ≡ 1 (mod 𝔭i) for i ≥ 2.

Then, compute the norm

N_{K∕ℚ}(x) = ∏_{σ ∈ Gal(K∕ℚ)} σ(x).

Each σ(x) is in K because K∕ℚ is Galois!

Since N_{K∕ℚ}(x) is an integer and lies in 𝔭1, it is divisible by p. Thus it should lie in 𝔭2 as well. But by the way we selected x, we have x ∉ σ^{−1}𝔭2 for every σ ∈ G! So σ(x) ∉ 𝔭2 for any σ, which is a contradiction. □

Theorem 54.4.3 (Inertial degree and ramification indices are all equal)
Assume K∕ℚ is Galois. Then for any rational prime p we have

p · 𝒪K = (𝔭1 𝔭2 ⋯ 𝔭g)^e

for some e, where the 𝔭i are distinct prime ideals with the same inertial degree f. Hence

[K : ℚ] = efg.

Proof. To see that the inertial degrees are equal, note that each σ induces an isomorphism

𝒪K∕𝔭 ≅ 𝒪K∕σ(𝔭).

Because the action is transitive, all fi are equal.

Exercise 54.4.4. Using the fact that σ ∈ Gal(K∕ℚ), show that

σ^{img}(p · 𝒪K) = p · σ^{img}(𝒪K) = p · 𝒪K.

So for every σ, we have that p · 𝒪K = ∏_i 𝔭i^{ei} = ∏_i (σ𝔭i)^{ei}. Since the action is transitive, all the ei are equal. □

Let’s see an illustration of this.

Example 54.4.5 (Factoring 5 in a Galois/non-Galois extension)
Let p = 5 be a prime.

(a)
Let E = ℚ(∛2). One can show that 𝒪E = ℤ[∛2], so we use the Factoring Algorithm on the minimal polynomial x³ − 2. Since x³ − 2 ≡ (x − 3)(x² + 3x + 9) (mod 5) is the irreducible factorization, we have that

(5) = (5, ∛2 − 3)(5, ∛4 + 3∛2 + 9)

which have inertial degrees 1 and 2, respectively. The fact that this is not uniform reflects that E is not Galois.

(b)
Now let K = ℚ(∛2, ω), the splitting field of x³ − 2 over ℚ; now K∕ℚ is Galois. It turns out that

𝒪K = ℤ[𝜀]   where 𝜀 is a root of t⁶ + 3t⁵ − 5t³ + 3t + 1

(this takes a lot of work to obtain, so we won't do it here). Modulo 5 this has the irreducible factorization (t² + t + 2)(t² + 3t + 3)(t² + 4t + 1) (mod 5), so by the Factoring Algorithm,

(5) = (5, 𝜀² + 𝜀 + 2)(5, 𝜀² + 3𝜀 + 3)(5, 𝜀² + 4𝜀 + 1).

This time all inertial degrees are 2, as the theorem predicts for K Galois.
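Both factorizations in this example can be double-checked by machine. A sketch assuming sympy, factoring the two polynomials mod 5 and comparing the degree patterns:

```python
# Degrees of the irreducible factors mod 5 give the inertial degrees.
from sympy import symbols, Poly

t = symbols('t')
for f in (t**3 - 2, t**6 + 3*t**5 - 5*t**3 + 3*t + 1):
    _, factors = Poly(f, t, modulus=5).factor_list()
    print([g.degree() for g, _ in factors])
# [1, 2]    <- E non-Galois: mixed inertial degrees
# [2, 2, 2] <- K Galois: all inertial degrees equal
```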

54.5  (Optional) Decomposition and inertia groups

Let p be a rational prime. Thus

p · 𝒪K = (𝔭1 ⋯ 𝔭g)^e

and all the 𝔭i have inertial degree f. Let 𝔭 denote a choice of the 𝔭i.

We can look at both of the fields 𝒪K∕𝔭 and ℤ∕pℤ = 𝔽p. Naturally, since 𝒪K∕𝔭 is a finite field, we can view it as a field extension of 𝔽p. So we get the diagram

[Diagram: the extension 𝔽p ⊆ 𝒪K∕𝔭 on the right, beside ℚ ⊆ K on the left.]

At the far right we have finite field extensions, which we know are really well behaved. So we ask:

How are Gal((𝒪K∕𝔭)∕𝔽p) and Gal(K∕ℚ) related?

Absurdly enough, there is an explicit answer: it’s just the stabilizer of 𝔭, at least when p is unramified.

Definition 54.5.1. Let D𝔭 ⊆ Gal(K∕ℚ) be the stabilizer of 𝔭, that is

D𝔭 := {σ ∈ Gal(K∕ℚ) | σ𝔭 = 𝔭}.

We say D𝔭 is the decomposition group of 𝔭.

Then, every σ ∈ D𝔭 induces an automorphism of 𝒪K∕𝔭 by

α ↦ σ(α) (mod 𝔭).

So there’s a natural map

𝜃 : D𝔭 → Gal((𝒪K∕𝔭)∕𝔽p)

by declaring 𝜃(σ) to just be "σ (mod 𝔭)". The fact that σ ∈ D𝔭 (i.e. σ fixes 𝔭) ensures this map is well-defined.

Theorem 54.5.2 (Decomposition group and Galois group)
Define 𝜃 as above. Then

(a) 𝜃 is surjective, and
(b) the kernel of 𝜃 has order e.

In particular, if p is unramified then D𝔭 ≅ Gal((𝒪K∕𝔭)∕𝔽p).

(The proof is not hard, but a bit lengthy and in my opinion not very enlightening.)

If p is unramified, then taking modulo 𝔭 gives an isomorphism D𝔭 ≅ Gal((𝒪K∕𝔭)∕𝔽p).

But we know exactly what Gal((𝒪K∕𝔭)∕𝔽p) is! We already have 𝒪K∕𝔭 ≅ 𝔽_{p^f}, and the Galois group is

Gal((𝒪K∕𝔭)∕𝔽p) ≅ Gal(𝔽_{p^f}∕𝔽p) ≅ ⟨x ↦ x^p⟩ ≅ ℤ∕fℤ.

So

D𝔭 ≅ ℤ∕fℤ

as well.

Let’s now go back to

D  −→𝜃 Gal ((𝒪   ∕𝔭) : 𝔽 ) .
  𝔭          K      p

The kernel of 𝜃 is called the inertia group and denoted I𝔭 D𝔭; it has order e.

This gives us a pretty cool sequence of subgroups {1}⊆ I D G where G is the Galois group (I’m dropping the 𝔭-subscripts now). Let’s look at the corresponding fixed fields via the Fundamental theorem of Galois theory. Picture:

SVG-Viewer needed.

Something curious happens:

(a) [K : K^I] = e,
(b) [K^I : K^D] = f, and
(c) [K^D : ℚ] = g.

In other words, the process of going up from ℚ to K, of total degree efg, can be very nicely broken into the three steps above. To draw this in the picture, we get

[Diagram: the same tower with the degrees e, f, g marked on its three steps.]

In any case, in the "typical" case that there is no ramification, we just have K^I = K.

54.6  Tangential remark: more general Galois extensions

All the discussion about Galois extensions carries over if we replace K∕ℚ by some other Galois extension L∕F. Instead of a rational prime p breaking down in 𝒪K, we would have a prime ideal 𝔭 of F breaking down as

𝔭 · 𝒪L = (𝔓1 ⋯ 𝔓g)^e

in 𝒪L, and then all results hold verbatim. (The 𝔓i are primes in L above 𝔭.) Instead of 𝔽p we would have 𝒪F∕𝔭.

The reason I choose to work with F = ℚ is that capital Gothic P's (𝔓) look really terrifying.

54.7  A few harder problems to think about


Problem 54A. Prove that no rational prime p can remain inert in K = ℚ(∛2, ω), the splitting field of x³ − 2. How does this generalize?

55  The Frobenius element

Throughout this chapter K∕ℚ is a Galois extension with Galois group G, p is an unramified rational prime in K, and 𝔭 is a prime above it. Picture:

[Diagram: 𝔭 ⊆ 𝒪K above p ⊆ ℤ.]

If p is unramified, then one can show there is a unique σ ∈ Gal(K∕ℚ) such that σ(α) ≡ α^p (mod 𝔭) for every α ∈ 𝒪K.

55.1  Frobenius elements

Prototypical example for this section: Frob𝔭 in ℤ[i] depends on p (mod 4).

Here is the theorem statement again:

Theorem 55.1.1 (The Frobenius element)
Assume K∕ℚ is Galois with Galois group G. Let p be a rational prime unramified in K, and 𝔭 a prime above it. There is a unique element Frob𝔭 ∈ G with the property that

Frob𝔭(α) ≡ α^p (mod 𝔭)   for every α ∈ 𝒪K.

It is called the Frobenius element at 𝔭, and has order f.

The uniqueness part is pretty important: it allows us to show that a given σ ∈ Gal(K∕ℚ) is the Frobenius element by just observing that it satisfies the above functional equation.

Let’s see an example of this:

Example 55.1.2 (Frobenius elements of the Gaussian integers)
Let's actually compute some Frobenius elements for K = ℚ(i), which has 𝒪K = ℤ[i]. This is a Galois extension with G ≅ ℤ∕2ℤ, consisting of the identity and complex conjugation.

If p is an odd prime with 𝔭 above it, then Frob𝔭 is the unique element such that

(a + bi)^p ≡ Frob𝔭(a + bi) (mod 𝔭)

in ℤ[i]. In particular,

Frob𝔭(i) = i^p, which is i if p ≡ 1 (mod 4) and −i if p ≡ 3 (mod 4).

From this we see that Frob𝔭 is the identity when p ≡ 1 (mod 4) and Frob𝔭 is complex conjugation when p ≡ 3 (mod 4).

Note that we really only needed to compute Frob𝔭 on i. If this seems too good to be true, a philosophical reason is “freshman’s dream” where (x + y)p xp + yp (mod p) (and hence mod 𝔭). So if σ satisfies the functional equation on generators, it satisfies the functional equation everywhere.

We also have an important lemma:

Lemma 55.1.3 (Order of the Frobenius element)
Let Frob𝔭 be a Frobenius element arising from an extension K∕ℚ. Then the order of Frob𝔭 is equal to the inertial degree f𝔭. In particular, (p) splits completely in 𝒪K if and only if Frob𝔭 = id.

Exercise 55.1.4. Prove this lemma by using the fact that 𝒪K∕𝔭 is the finite field of order p^{f𝔭}, and the Frobenius element is just x ↦ x^p on this field.

Let us now prove the main theorem. This will only make sense in the context of decomposition groups, so readers who skipped that part should omit this proof.

Proof of existence of Frobenius element. The entire theorem is just a rephrasing of the fact that the map 𝜃 defined in the last section is an isomorphism when p is unramified.

In here we can restrict our attention to D𝔭, since we need σ(α) ≡ 0 (mod 𝔭) whenever α ≡ 0 (mod 𝔭). Thus we have the isomorphism

𝜃 : D𝔭 → Gal((𝒪K∕𝔭)∕𝔽p).

But we already know Gal((𝒪K∕𝔭)∕𝔽p), according to the string of isomorphisms

Gal((𝒪K∕𝔭)∕𝔽p) ≅ Gal(𝔽_{p^f}∕𝔽p) ≅ ⟨T = x ↦ x^p⟩ ≅ ℤ∕fℤ.

So the unique such element is the pre-image of T under 𝜃. □

55.2  Conjugacy classes

Now suppose 𝔭1 and 𝔭2 are two primes above an unramified rational prime p. Then we can define Frob𝔭1 and Frob𝔭2. Since the Galois group acts transitively, we can select σ ∈ Gal(K∕ℚ) such that

σ(𝔭1) = 𝔭2.

We claim that

Frob𝔭2 = σ ∘ Frob𝔭1 ∘ σ^{−1}.

Note that this is an equation in G.

Question 55.2.1. Prove this.

More generally, for a given unramified rational prime p, we obtain:

Theorem 55.2.2 (Conjugacy classes in Galois groups)
The set

{Frob𝔭 | 𝔭 above p}

is one of the conjugacy classes of G.

Proof. We’ve used the fact that G = Gal(K∕) is transitive to show that Frob𝔭1 and Frob𝔭2 are conjugate if they both lie above p; hence it’s contained in some conjugacy class. So it remains to check that for any 𝔭, σ, we have σFrob𝔭 σ1 = Frob𝔭 for some 𝔭. For this, just take 𝔭= σ𝔭. Hence the set is indeed a conjugacy class. □

In summary,

Frob𝔭 is determined up to conjugation by the prime p from which 𝔭 arises.

So even though the Gothic letters look scary, the content of Frob𝔭 really just comes from the more friendly-looking rational prime p.

Example 55.2.3 (Frobenius elements in ℚ(∛2, ω))
With those remarks, here is a more involved example of a Frobenius map. Let K = ℚ(∛2, ω) be the splitting field of

t³ − 2 = (t − ∛2)(t − ω∛2)(t − ω²∛2).

Thus K∕ℚ is Galois. We've seen in an earlier example that

𝒪K = ℤ[𝜀]   where 𝜀 is a root of t⁶ + 3t⁵ − 5t³ + 3t + 1.

Let's consider the prime 5 which factors (trust me here) as

(5) = (5, 𝜀² + 𝜀 + 2)(5, 𝜀² + 3𝜀 + 3)(5, 𝜀² + 4𝜀 + 1) = 𝔭1𝔭2𝔭3.

Note that all the prime ideals have inertial degree 2. Thus Frob𝔭i will have order 2 for each i.

Note that

Gal(K∕ℚ) = permutations of {∛2, ω∛2, ω²∛2} ≅ S3.

In this S3 there are 3 elements of order two: fixing one root and swapping the other two. These correspond to each of Frob𝔭1, Frob𝔭2, Frob𝔭3.

In conclusion, the conjugacy class {Frob𝔭1, Frob𝔭2, Frob𝔭3} associated to (5) is the cycle type (∙)(∙∙) in S3.

55.3  Chebotarev density theorem

Natural question: can we represent every conjugacy class in this way? In other words, is every element of G equal to Frob𝔭 for some 𝔭?

Miraculously, not only is the answer “yes”, but in fact it does so in the nicest way possible: the Frob𝔭’s are “equally distributed” when we pick a random 𝔭.

Theorem 55.3.1 (Chebotarev density theorem over ℚ)
Let C be a conjugacy class of G = Gal(K∕ℚ). The density of (unramified) primes p such that {Frob𝔭 | 𝔭 above p} = C is exactly |C|∕|G|. In particular, for any σ ∈ G there are infinitely many rational primes p with 𝔭 above p so that Frob𝔭 = σ.

By density, I mean that the proportion of primes p ≤ x that work approaches |C|∕|G| as x → ∞. Note that I'm throwing out the primes that ramify in K. This is no issue, since the only primes that ramify are those dividing ΔK, of which there are only finitely many.

In other words, if I pick a random prime p and look at the resulting conjugacy class, it’s a lot like throwing a dart at G: the probability of hitting any conjugacy class depends just on the size of the class.

Remark 55.3.2 — Happily, this theorem (and the preceding discussion) also works if we replace K∕ℚ with any Galois extension K∕F; in that case we replace "𝔭 over p" with "𝔓 over 𝔭". In that case, we use N(𝔭) ≤ x rather than p ≤ x as the way to define density.

55.4  Example: Frobenius elements of cyclotomic fields

Let q be a prime, and consider L = ℚ(ζq), with ζq a primitive qth root of unity. You should recall from various starred problems that:

(a) L∕ℚ is a Galois extension of degree q − 1, unramified at every rational prime p ≠ q.
(b) Gal(L∕ℚ) ≅ (ℤ∕qℤ)^×, where σn denotes the automorphism ζq ↦ ζq^n.

This is surprisingly nice, because elements of Gal(L∕ℚ) look a lot like Frobenius elements already. Specifically:

Lemma 55.4.1 (Cyclotomic Frobenius elements)
In the cyclotomic setting L = ℚ(ζq), let p be a rational unramified prime and 𝔭 a prime above it. Then

Frob 𝔭 = σp.

Proof. Observe that σp satisfies the functional equation (check on generators). Done by uniqueness. □

Question 55.4.2. Conclude that a rational prime p splits completely in 𝒪L if and only if p ≡ 1 (mod q).
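This splitting criterion is pleasant to test numerically. A sketch assuming sympy, taking q = 7 (cyclotomic_poly gives the qth cyclotomic polynomial):

```python
# p splits completely in Q(zeta_7) iff Phi_7 splits into linear factors
# mod p, which should happen exactly when p = 1 (mod 7).
from sympy import symbols, Poly, cyclotomic_poly, primerange

x = symbols('x')
phi7 = cyclotomic_poly(7, x)
for p in primerange(2, 100):
    if p == 7:
        continue                                # 7 is ramified
    _, factors = Poly(phi7, x, modulus=p).factor_list()
    splits = all(g.degree() == 1 for g, _ in factors)
    assert splits == (p % 7 == 1)
```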

55.5  Frobenius elements behave well with restriction

Let L∕ℚ and K∕ℚ be Galois extensions with K ⊆ L, and consider the setup

[Diagram: the tower ℚ ⊆ K ⊆ L, with 𝔭 above (p) and 𝔓 above 𝔭.]

Here 𝔭 is above (p) and 𝔓 is above 𝔭. We may define

Frob𝔭 : K → K   and   Frob𝔓 : L → L

and want to know how these are related.

Theorem 55.5.1 (Restrictions of Frobenius elements)
Assume L∕ℚ and K∕ℚ are both Galois. Let 𝔓 and 𝔭 be unramified as above. Then Frob𝔓 restricted to K equals Frob𝔭, i.e. for every α ∈ K,

Frob𝔭(α) = Frob𝔓(α).

Proof. We know

Frob𝔓(α) ≡ α^p (mod 𝔓)   for all α ∈ 𝒪L

from the definition.

Question 55.5.2. Deduce that

Frob𝔓(α) ≡ α^p (mod 𝔭)   for all α ∈ 𝒪K.

(This is weaker than the previous statement in two ways!)

Thus Frob𝔓 restricted to 𝒪K satisfies the characterizing property of Frob𝔭. □

In short, the point of this section is that

Frobenius elements upstairs restrict to Frobenius elements downstairs.

55.6  Application: Quadratic reciprocity

We now aim to prove:

Theorem 55.6.1 (Quadratic reciprocity)
Let p and q be distinct odd primes. Then

(p∕q)(q∕p) = (−1)^{(p−1)∕2 · (q−1)∕2}.

(See, e.g. [?] for an exposition on quadratic reciprocity, if you’re not familiar with it.)

55.6.i  Step 1: Setup

For this proof, we first define

L = ℚ (ζq)

where ζq is a primitive qth root of unity. Then L∕ℚ is Galois, with Galois group G.

Question 55.6.2. Show that G has a unique subgroup H of order two.

In fact, we can describe it exactly: viewing G ≅ (ℤ∕qℤ)^×, we have

H = { σn | n is a quadratic residue mod q }.

By the fundamental theorem of Galois theory, there ought to be a degree 2 extension of ℚ inside ℚ(ζq) (that is, a quadratic field). Call it ℚ(√q*), for q* squarefree:

[Diagram: the tower ℚ ⊆ ℚ(√q*) ⊆ L = ℚ(ζq), with H = Gal(L∕ℚ(√q*)).]

Exercise 55.6.3. Note that if a rational prime ramifies in K, then it ramifies in L. Use this to show that

q* = ±q   and   q* ≡ 1 (mod 4).

Together these determine the value of q*.

(Actually, it is true in general that ΔK divides ΔL in a tower L∕K∕ℚ.)

55.6.ii  Step 2: Reformulation

Now we are going to prove:

Theorem 55.6.4 (Quadratic reciprocity, equivalent formulation)
For distinct odd primes p, q we have

(p∕q) = (q*∕p).

Exercise 55.6.5. Using the fact that (−1∕p) = (−1)^{(p−1)∕2}, show that this is equivalent to quadratic reciprocity as we know it.

We look at the rational prime p. Either it splits into two primes in K or it is inert; either way let 𝔭 be a prime factor in the resulting decomposition (so 𝔭 is either p · 𝒪K in the inert case, or one of the two primes in the split case). Then let 𝔓 be above 𝔭. It could possibly split further in L: the picture looks like

[Diagram: p below 𝔭 ⊆ 𝒪K below 𝔓 ⊆ 𝒪L.]

Question 55.6.6. Why is p not ramified in either K or L?

55.6.iii  Step 3: Introducing the Frobenius

Now, we take the Frobenius

σp = Frob𝔓 ∈ Gal(L∕ℚ).

We claim that

Frob𝔓 ∈ H  ⇐⇒   p splits in K.

To see this, note that Frob𝔓 is in H if and only if it acts as the identity on K. But Frob𝔓 restricted to K is Frob𝔭! So

Frob𝔓 ∈ H ⟺ Frob𝔭 = idK.

Finally note that Frob𝔭 has order 1 if p splits (𝔭 has inertial degree 1) and order 2 if p is inert. This completes the proof of the claim.

55.6.iv  Finishing up

We already know by ??  that Frob𝔓 = σp ∈ H if and only if p is a quadratic residue mod q. On the other hand,

Exercise 55.6.7. Show that p splits in 𝒪K = ℤ[½(1 + √q*)] if and only if (q*∕p) = 1. (Use the Factoring Algorithm. You need the fact that p ≠ 2 here.)

In other words

(p∕q) = 1 ⟺ σp ∈ H ⟺ p splits in ℤ[½(1 + √q*)] ⟺ (q*∕p) = 1.

This completes the proof.
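Of course, the theorem itself is easy to sanity-check numerically. A sketch assuming sympy's legendre_symbol:

```python
# Verify (p/q)(q/p) = (-1)^{(p-1)/2 * (q-1)/2} for small odd primes.
from sympy import primerange
from sympy.ntheory import legendre_symbol

for p in primerange(3, 40):
    for q in primerange(3, 40):
        if p != q:
            lhs = legendre_symbol(p, q) * legendre_symbol(q, p)
            rhs = (-1) ** (((p - 1) // 2) * ((q - 1) // 2))
            assert lhs == rhs
print("quadratic reciprocity checked for odd primes below 40")
```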

55.7  Frobenius elements control factorization

Prototypical example for this section: Frob𝔭 controlled the splitting of p in the proof of quadratic reciprocity; the same holds in general.

In the proof of quadratic reciprocity, we used the fact that Frobenius elements behaved well with restriction in order to relate the splitting of p with properties of Frob𝔭.

In fact, there is a much stronger statement for any intermediate field ℚ ⊆ E ⊆ K, which works even if E∕ℚ is not Galois. It relies on the notion of a factorization pattern. Here is how it goes.

Set n = [E : ℚ], and let p be a rational prime unramified in K. Then p can be broken down in E as

p · 𝒪E = 𝔭1 𝔭2 ⋯ 𝔭g

with inertial degrees f1, …, fg (these inertial degrees might differ from one another, since E∕ℚ isn't Galois). The numbers f1 + ⋯ + fg = n form a partition of the number n. For example, in the quadratic reciprocity proof we had n = 2, with possible partitions 1 + 1 (if p split) and 2 (if p was inert). We call this partition the factorization pattern of p in E.

Next, we introduce a Frobenius Frob𝔓 above (p), all the way up in K; this is an element of G = Gal(K∕ℚ). Then let H be the subgroup of G corresponding to the field E. Diagram:

[Diagram: E between ℚ and K, with H = Gal(K∕E) between G and {1}.]

Then Frob𝔓 induces a permutation of the n left cosets gH by left multiplication (after all, Frob𝔓 is an element of G too!). Just as with any permutation, we may look at the resulting cycle decomposition, which has a natural “cycle structure”: a partition of n.

The theorem is that these coincide:

Theorem 55.7.1 (Frobenius elements control decomposition)
Let ℚ ⊆ E ⊆ K be an extension of number fields, and assume K∕ℚ is Galois (though E∕ℚ need not be). Pick an unramified rational prime p; let G = Gal(K∕ℚ) and let H be the subgroup corresponding to E. Finally, let 𝔓 be a prime above p in K.

Then the factorization pattern of p in E is given by the cycle structure of Frob𝔓 acting on the left cosets of H.

Often, we take E = K, in which case this is just asserting that the decomposition of the prime p is controlled by a Frobenius element over it.

An important special case is when E = ℚ(α), because, as we will see, it lets us determine how the minimal polynomial of α factors modulo p. To motivate this, let's go back a few chapters and think about the Factoring Algorithm.

Let α be an algebraic integer and f its minimal polynomial (of degree n). Set E = ℚ(α) (which has degree n over ℚ). Suppose we're lucky enough that 𝒪E = ℤ[α], i.e. that E is monogenic. Then we know by the Factoring Algorithm that, to factor any p in E, all we have to do is factor f modulo p: if f ≡ f1^{e1} ⋯ fg^{eg} (mod p), then we have

(p) = ∏_i 𝔭i^{ei} = ∏_i (fi(α), p)^{ei}.

This gives us complete information about the ramification indices and inertial degrees: the ei are the ramification indices, and the deg fi are the inertial degrees (since 𝒪E∕𝔭i ≅ 𝔽p[X]∕(fi(X))).

In particular, if p is unramified then all the ei are equal to 1, and we get

n = deg f = deg f1 + deg f2 + ⋯ + deg fg.

Once again we have a partition of n; we call this the factorization pattern of f modulo p. So, to see the factorization pattern of an unramified p in 𝒪E, we just have to know the factorization pattern of f (mod p).

Turning this on its head, if we want to know the factorization pattern of f (mod p), we just need to know how p decomposes. And it turns out these coincide even without the assumption that E is monogenic.

Theorem 55.7.2 (Frobenius controls polynomial factorization)
Let α be an algebraic integer with minimal polynomial f, and let E = ℚ(α). Then for any prime p unramified in the splitting field K of f, the following coincide:

(i)
The factorization pattern of p in E.
(ii)
The factorization pattern of f (mod p).
(iii)
The cycle structure associated to the action of Frob𝔓 ∈ Gal(K∕ℚ) on the roots of f, where 𝔓 is above p in K.

Example 55.7.3 (Factoring x³ − 2 (mod 5))
Let α = ∛2 and f = x³ − 2, so E = ℚ(∛2). Set p = 5, and finally let K = ℚ(∛2, ω) be the splitting field of f. Setup:

[Diagram: the tower ℚ ⊆ E ⊆ K, with the primes above 5 in each.]

The three claimed objects now all have shape 2 + 1:

(i)
By the Factoring Algorithm, we have (5) = (5, ∛2 − 3)(5, ∛4 + 3∛2 + 9).
(ii)
We have x³ − 2 ≡ (x − 3)(x² + 3x + 9) (mod 5).
(iii)
We saw before that Frob𝔓 has cycle type (∙)(∙∙).

Sketch of proof. Let n = deg f. Let H be the subgroup of G = Gal(K∕ℚ) corresponding to E, so [G : H] = n. Pictorially, we have

[Diagram: E = ℚ(α) corresponding to H ⊆ G.]

We claim that (i), (ii), (iii) are all equivalent to

(iv) The cycle structure of the action of Frob𝔓 on the cosets G∕H.

In other words, we claim the cosets correspond to the n roots of f in K. Indeed, H is just the set of τ ∈ G such that τ(α) = α, so there's a bijection between the roots and the cosets G∕H by τH ↦ τ(α). Think of it this way: if G = Sn and H = {τ : τ(1) = 1}, then G∕H has order n!∕(n−1)! = n and corresponds to the elements {1, …, n}. So there is a natural bijection from (iii) to (iv).

The fact that (i) is in bijection with (iv) was the previous theorem, ?? . The correspondence (i) ⟺ (ii) is a fact of Galois theory, so we omit the proof here. □

All this can be done in general with ℚ replaced by F; for example, in [?].
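Combining this theorem with Chebotarev gives a statement you can see in data: the factorization patterns of x³ − 2 modulo p occur with the same densities as the cycle types in S3, namely 1∕6 for (∙)(∙)(∙), 1∕2 for (∙)(∙∙), and 1∕3 for (∙∙∙). A sketch assuming sympy:

```python
# Tally factorization patterns of x^3 - 2 mod p over many primes.
from collections import Counter
from sympy import symbols, Poly, primerange

x = symbols('x')
counts, total = Counter(), 0
for p in primerange(5, 5000):                  # skip ramified 2 and 3
    _, factors = Poly(x**3 - 2, x, modulus=p).factor_list()
    pattern = tuple(sorted(d for g, e in factors for d in [g.degree()] * e))
    counts[pattern] += 1
    total += 1
for pattern, c in sorted(counts.items()):
    print(pattern, round(c / total, 3))
# Roughly: (1, 1, 1) -> 1/6, (1, 2) -> 1/2, (3,) -> 1/3.
```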

55.8  Example application: IMO 2003 problem 6

As an example of the power we now have at our disposal, let’s prove:


Problem 6. Let p be a prime number. Prove that there exists a prime number q such that for every integer n, the number np p is not divisible by q.

We will show, much more strongly, that there exist infinitely many primes q such that Xp p is irreducible modulo q.

Solution. Okay! First, we draw the tower of fields

ℚ ⊆ ℚ(ᵖ√p) ⊆ K

where K is the splitting field of f(x) = x^p − p. Let E = ℚ(ᵖ√p) for brevity, and note it has degree [E : ℚ] = p. Let G = Gal(K∕ℚ).

Question 55.8.1. Show that p divides the order of G. (Look at E.)

Hence by Cauchy’s theorem (?? , which is a purely group-theoretic fact) we can find a σ G of order p. By Chebotarev, there exist infinitely many rational (unramified) primes qp and primes 𝔔 ⊆𝒪K above q such that Frob𝔔 = σ. (Yes, that’s an uppercase Gothic Q. Sorry.)

We claim that all these q work.

By ?? , the factorization of f (mod q) is controlled by the action of σ = Frob𝔔 on the roots of f. But σ has prime order p in G! So all the lengths in the cycle structure have to divide p. Thus the possible factorization patterns of f are

p = 1 + 1 + ⋯ + 1   (p terms)   or   p = p.

So we just need to rule out the p = 1 + ⋅⋅⋅ + 1 case now: this only happens if f breaks into linear factors mod q. Intuitively this edge case seems highly unlikely (are we really so unlucky that f factors into linear factors when we want it to be irreducible?). And indeed this is easy to see: this means that σ fixes all of the roots of f in K, but that means σ fixes K altogether, and hence is the identity of G, contradiction. □

Remark 55.8.2 — In fact K = ℚ(ζp, ᵖ√p), and |G| = p(p − 1). With a little more group theory, we can show that in fact the density of primes q that work is 1∕p.
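The density claim is also easy to observe empirically. A sketch for p = 5, assuming sympy:

```python
# Fraction of primes q for which x^5 - 5 is irreducible mod q.
from sympy import symbols, Poly, primerange

x = symbols('x')
good = total = 0
for q in primerange(2, 20000):
    if q == 5:
        continue                               # 5 is ramified
    _, factors = Poly(x**5 - 5, x, modulus=q).factor_list()
    total += 1
    if len(factors) == 1 and factors[0][0].degree() == 5:
        good += 1
print(good, "/", total, "=", round(good / total, 3))   # near 1/5
```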

55.9  A few harder problems to think about

Problem 55A. Show that for an odd prime p,

(2∕p) = (−1)^{(p²−1)∕8}.

Problem 55B. Let f be a nonconstant polynomial with integer coefficients. Suppose f (mod p) splits completely into linear factors for all sufficiently large primes p. Show that f splits completely into linear factors.

Problem 55C (Dirichlet's theorem on arithmetic progressions). Let a and m be relatively prime positive integers. Show that the density of primes p ≡ a (mod m) is exactly 1∕ϕ(m).

Problem 55D. Let n be an odd integer which is not a prime power. Show that the nth cyclotomic polynomial is not irreducible modulo any rational prime.

Problem 55E (Putnam 2012 B6). Let p be an odd prime such that p ≡ 2 (mod 3). Let π be the permutation of 𝔽p defined by π(x) = x³ (mod p). Show that π is even if and only if p ≡ 3 (mod 4).

56  Bonus: A Bit on Artin Reciprocity

In this chapter, I'm going to state some big theorems of global class field theory and use them to deduce the Kronecker-Weber theorem plus the existence of Hilbert class fields. No proofs, but hopefully still appreciable. For experts: this is global class field theory, without ideles.

Here’s the executive summary: let K be a number field. Then all abelian extensions L∕K can be understood using solely information intrinsic to K: namely, the ray class groups (generalizing ideal class groups).

56.1  Infinite primes

Prototypical example for this section: ℚ(√−5) has a complex infinite prime; ℚ(√5) has two real infinite ones.

Let K be a number field of degree n and signature (r, s). We know what a prime ideal of 𝒪K is, but we now allow for the so-called infinite primes, which I'll describe using the embeddings. Recall there are n embeddings σ : K → ℂ, which consist of

(a) r real embeddings, whose image lies in ℝ, and
(b) s pairs of complex embeddings, each pair related by complex conjugation.

Hence r + 2s = n. The first class of embeddings form the real infinite primes, while each conjugate pair of complex embeddings forms a complex infinite prime. We say K is totally real (resp. totally complex) if all its infinite primes are real (resp. complex).

Example 56.1.1 (Examples of infinite primes)

(a) ℚ has a single real infinite prime, from the embedding ℚ ↪ ℝ.
(b) ℚ(√5) has two real infinite primes, corresponding to √5 ↦ ±√5.
(c) ℚ(√−5), being totally complex, has a single complex infinite prime.

56.2  Modular arithmetic with infinite primes

A modulus is a formal product

𝔪 = ∏_𝔭 𝔭^{ν(𝔭)}

where the product runs over all primes, finite and infinite. (Here ν(𝔭) is a nonnegative integer, of which only finitely many are nonzero.) We also require that

(a) ν(𝔭) = 0 for every complex infinite prime 𝔭, and
(b) ν(𝔭) ≤ 1 for every real infinite prime 𝔭.

Obviously, every 𝔪 can be written as 𝔪 = 𝔪0 𝔪∞ by separating the finite from the (real) infinite primes.

We say a ≡ b (mod 𝔪) if

(a) a ≡ b modulo 𝔭^{ν(𝔭)} for each finite prime 𝔭 dividing 𝔪0, and
(b) σ(a∕b) > 0 for each real infinite prime σ dividing 𝔪∞.

With this, we can define a generalization of the class group:

Definition 56.2.1. Let 𝔪 be a modulus of a number field K.

(a) Let IK(𝔪) denote the group of fractional ideals of K coprime to 𝔪0.
(b) Let PK(𝔪) denote the subgroup of IK(𝔪) consisting of principal fractional ideals (a∕b), where a, b ∈ 𝒪K and a ≡ b (mod 𝔪).

Finally define the ray class group of 𝔪 to be CK(𝔪) = IK(𝔪)∕PK(𝔪).

This group is known to always be finite. Note the usual class group is CK(1).

One last definition that we’ll use right after Artin reciprocity:

Definition 56.2.2. A congruence subgroup of 𝔪 is a subgroup H with

PK (𝔪 ) ⊆ H ⊆ IK (𝔪).

Thus CK(𝔪) has a lattice of quotients IK(𝔪)∕H, one for each congruence subgroup H.

This definition takes a while to get used to, so here are examples.

Example 56.2.3 (Ray class groups in ℚ, finite modulus)
Consider K = ℚ with infinite prime ∞. Then Cℚ(1) is the usual class group of ℚ, which is trivial. More generally,

Cℚ(m) = (ℤ∕mℤ)^× ∕ {±1}

since the ideal (a∕b)ℤ remembers the ratio a∕b only up to sign.

Example 56.2.4 (Ray class groups in ℚ, infinite moduli)
Consider K = ℚ with infinite prime ∞ again. This time the condition at ∞ pins down the sign of a∕b, and so

Cℚ(m∞) = (ℤ∕mℤ)^×.

56.3  Infinite primes in extensions

I want to emphasize that everything above is intrinsic to a particular number field K. After this point we are going to consider extensions L∕K, but it is important to keep in mind the distinction: the concepts of modulus and ray class group are defined solely in terms of K, not L.

Now take a Galois extension L∕K of degree m. We already know prime ideals 𝔭 of K break into products of prime ideals 𝔓 of L in a nice way, so we want to do the same thing with infinite primes. This is straightforward: each of the n infinite primes σ : K → ℂ lifts to m infinite primes τ : L → ℂ, by which I mean the diagram

[Diagram: τ : L → ℂ restricting to σ : K → ℂ.]

commutes. Hence like before, each infinite prime σ of K has m infinite primes τ of L which lie above it.

For a real prime σ of K, if any of the resulting τ above it are complex, we say that the prime σ ramifies in the extension L∕K. Otherwise it is unramified in L∕K. A complex infinite prime of K is always unramified in L∕K. In this way, we can talk about an unramified Galois extension L∕K: it is one where all primes (finite or infinite) are unramified.

Example 56.3.1 (Ramification of ∞)
Let ∞ be the real infinite prime of ℚ.

(a) ∞ is ramified in ℚ(√−5)∕ℚ, since the infinite primes of ℚ(√−5) are complex.
(b) ∞ is unramified in ℚ(√5)∕ℚ, since both infinite primes of ℚ(√5) are real.

Note also that if K is totally complex, then any extension L∕K is unramified at the infinite primes.

56.4  Frobenius element and Artin symbol

Recall the key result:

Theorem 56.4.1 (Frobenius element)
Let L∕K be a Galois extension. If 𝔭 is a prime of K unramified in L, and 𝔓 a prime above it in L, then there is a unique element of Gal(L∕K), denoted Frob𝔓, obeying

Frob𝔓(α) ≡ α^{N𝔭} (mod 𝔓)   for all α ∈ 𝒪L.

Example 56.4.2 (Example of Frobenius elements)
Let L = ℚ(i), K = ℚ. We have Gal(L∕K) ≅ ℤ∕2ℤ.

If p is an odd prime with 𝔓 above it, then Frob𝔓 is the unique element such that

(a + bi)^p ≡ Frob𝔓(a + bi) (mod 𝔓)

in ℤ[i]. In particular,

Frob𝔓(i) = i^p, which is i if p ≡ 1 (mod 4) and −i if p ≡ 3 (mod 4).

From this we see that Frob𝔓 is the identity when p 1 (mod 4) and Frob𝔓 is complex conjugation when p 3 (mod 4).

Example 56.4.3 (Cyclotomic Frobenius element)
Generalizing the previous example, let L = ℚ(ζ) and K = ℚ, with ζ an mth root of unity. It's well-known that L∕K is unramified outside ∞ and the prime factors of m. Moreover, the Galois group Gal(L∕K) is (ℤ∕mℤ)^×: the Galois group consists of elements of the form

σn : ζ ↦→ ζn

and Gal(L∕K) = {σn | n ∈ (ℤ ∕m ℤ)×}.

Then it follows just like before that if p ∤ m is prime and 𝔓 is above p,

Frob𝔓 = σp.

An important property of the Frobenius element is its order is related to the decomposition of 𝔭 in the higher field L in the nicest way possible:

Lemma 56.4.4 (Order of the Frobenius element)
The Frobenius element Frob𝔓 Gal(L∕K) of an extension L∕K has order equal to the inertial degree of 𝔓, that is,

ord Frob𝔓 = f(𝔓 ∣ 𝔭).

In particular, Frob𝔓 = id if and only if 𝔭 splits completely in L∕K.

Proof. We want to understand the order of the map T : x ↦ x^{N𝔭} on the field 𝒪L∕𝔓. But the latter is isomorphic to the splitting field of X^{N𝔓} − X over 𝔽p, by the Galois theory of finite fields. Hence the order is log_{N𝔭}(N𝔓) = f(𝔓 ∣ 𝔭). □

Exercise 56.4.5. Deduce from this that the rational primes which split completely in ℚ(ζ) are exactly those with p ≡ 1 (mod m). Here ζ is an mth root of unity.

The Galois group acts transitively on the set of 𝔓 above a given 𝔭, and we have

Frob_{σ(𝔓)} = σ ∘ Frob𝔓 ∘ σ^{−1}.

Thus Frob𝔓 is determined by its underlying 𝔭 up to conjugation.

In class field theory, we are interested in abelian extensions, i.e. those for which Gal(L∕K) is abelian. Here the theory becomes extra nice: the conjugacy classes have size one.

Definition 56.4.6. Assume L∕K is an abelian extension. Then for a given unramified prime 𝔭 in K, the element Frob𝔓 doesn’t depend on the choice of 𝔓. We denote the resulting Frob𝔓 by the Artin symbol,

(L∕K ∕ 𝔭).

The definition of the Artin symbol is written deliberately to look like the Legendre symbol. To see why:

Example 56.4.7 (Legendre symbol subsumed by Artin symbol)
Suppose we want to understand (2∕p) ≡ 2^{(p−1)∕2} (mod p), where p > 2 is prime. Consider the element

(ℚ(√2)∕ℚ ∕ pℤ) ∈ Gal(ℚ(√2)∕ℚ).

It is uniquely determined by where it sends √2. But in fact we have

(ℚ(√2)∕ℚ ∕ pℤ)(√2) ≡ (√2)^p ≡ 2^{(p−1)∕2} · √2 ≡ (2∕p) √2   (mod 𝔓)

where (2∕p) is the usual Legendre symbol, and 𝔓 is above p in ℚ(√2). Thus the Artin symbol generalizes the quadratic Legendre symbol.

Example 56.4.8 (Cubic Legendre symbol subsumed by Artin symbol)
Similarly, it also generalizes the cubic Legendre symbol. To see this, assume 𝜃 is primary in K = ℚ(√−3) = ℚ(ω) (thus 𝒪K = ℤ[ω] is the ring of Eisenstein integers). Then for example

(K(∛2)∕K ∕ 𝜃𝒪K)(∛2) ≡ (∛2)^{N(𝜃)} ≡ 2^{(N𝜃−1)∕3} · ∛2 ≡ (2∕𝜃)3 ∛2   (mod 𝔓)

where (2∕𝜃)3 is the cubic Legendre symbol and 𝔓 is above 𝜃 in K(∛2).

56.5  Artin reciprocity

Now, we further capitalize on the fact that Gal(L∕K) is abelian. For brevity, in what follows let Ram(L∕K) denote the primes of K (either finite or infinite) which ramify in L.

Definition 56.5.1. Let L∕K be an abelian extension and let 𝔪 be divisible by every prime in Ram(L∕K). Then since L∕K is abelian we can extend the Artin symbol multiplicatively to a map

(L∕K ∕ ∙) : IK(𝔪) ↠ Gal(L∕K).

This is called the Artin map, and it is surjective (for example by Chebotarev Density). Thus we denote its kernel by

H (L∕K, 𝔪) ⊆ IK (𝔪 ).

In particular we have Gal(L∕K) ≅ IK(𝔪)∕H(L∕K, 𝔪).

We can now present the long-awaited Artin reciprocity theorem.

Theorem 56.5.2 (Artin reciprocity)
Let L∕K be an abelian extension. Then there is a modulus 𝔣 = 𝔣(L∕K), divisible by exactly the primes of Ram(L∕K), such that: for any modulus 𝔪 divisible by all primes of Ram(L∕K), we have

PK(𝔪 ) ⊆ H (L ∕K,𝔪 ) ⊆ IK (𝔪) if and only if 𝔣 | 𝔪.

We call 𝔣 the conductor of L∕K.

So the conductor 𝔣 plays a similar role to the discriminant (divisible by exactly the primes which ramify), and when 𝔪 is divisible by the conductor, H(L∕K,𝔪) is a congruence subgroup.

Here’s the reason this is called a “reciprocity” theorem. Recalling that CK(𝔣) = IK(𝔣)∕PK(𝔣), the above theorem tells us we get a sequence of maps

[Diagram: the Artin map IK(𝔣) ↠ Gal(L∕K), factoring through the ray class group CK(𝔣).]

Consequently:

For primes 𝔭 ∈ IK(𝔣), the Artin symbol (L∕K ∕ 𝔭) depends only on "𝔭 (mod 𝔣)".

Let’s see how this result relates to quadratic reciprocity.

Example 56.5.3 (Artin reciprocity implies quadratic reciprocity)
The big miracle of quadratic reciprocity states that, for a fixed (squarefree) a, the Legendre symbol (a∕p) should only depend on the residue of p modulo something. Let's see why Artin reciprocity tells us this a priori.

Let L = ℚ(√a), K = ℚ. Then we've already seen that the Artin symbol

(ℚ(√a)∕ℚ ∕ ∙)

is the correct generalization of the Legendre symbol. Thus, Artin reciprocity tells us that there is a conductor 𝔣 = 𝔣(ℚ(√a)∕ℚ) such that (ℚ(√a)∕ℚ ∕ p) depends only on the residue of p modulo 𝔣, which is what we wanted.

Here is an example along the same lines.

Example 56.5.4 (Cyclotomic field)
Let ζ be a primitive mth root of unity. For (unramified) primes p, we know that Frobp ∈ Gal(ℚ(ζ)∕ℚ) is "exactly" p (mod m). Let's translate this idea into the notation of Artin reciprocity.

We are going to prove

H(ℚ(ζ)∕ℚ, m∞) = Pℚ(m∞) = { (a∕b)ℤ | a∕b ≡ 1 (mod m) }.

This is the generic example of achieving the lower bound in Artin reciprocity. It also implies that 𝔣(ℚ(ζ)∕ℚ) ∣ m∞.

It's well-known that ℚ(ζ)∕ℚ is unramified outside the primes dividing m∞, so the Artin symbol is defined on Iℚ(m∞). Now the Artin map is given by

[Diagram: the Artin map Iℚ(m∞) → Gal(ℚ(ζ)∕ℚ) ≅ (ℤ∕mℤ)^×.]

So we see that the kernel of this map corresponds to the identity of the Galois group, i.e. to the ideals (a∕b)ℤ with a∕b ≡ 1 (mod m). On the other hand, we've also computed Pℚ(m∞) already, so we have the desired equality.

In fact, we also have the following “existence theorem”: every congruence subgroup appears uniquely once we fix 𝔪.

Theorem 56.5.5 (Takagi existence theorem)
Fix K and let 𝔪 be a modulus. Consider any congruence subgroup H, i.e. 

PK (𝔪 ) ⊆ H ⊆ IK (𝔪).

Then H = H(L∕K,𝔪) for a unique abelian extension L∕K.

Finally, such subgroups reverse inclusion in the best way possible:

Lemma 56.5.6 (Inclusion-reversing congruence subgroups)
Fix a modulus 𝔪. Let L∕K and M∕K be abelian extensions and suppose 𝔪 is divisible by the conductors of L∕K and M∕K. Then

L ⊆ M    if and only if H (M ∕K, 𝔪 ) ⊆ H (L∕K, 𝔪 ).

Here by L M we mean that L is isomorphic to some subfield of M.

Sketch of proof. Let us first prove the equivalence with 𝔪 fixed. In one direction, assume L M; one can check from the definitions that the diagram

[Diagram: the Artin maps IK(𝔪) ↠ Gal(M∕K) and IK(𝔪) ↠ Gal(L∕K), connected by the restriction map Gal(M∕K) → Gal(L∕K).]

commutes, because it suffices to verify this for prime powers, which is just saying that Frobenius elements behave well with respect to restriction. Then the inclusion of kernels follows directly. The reverse direction is essentially the Takagi existence theorem. □

Note that we can always take 𝔪 to be the product of conductors here.

To finish, here is a quote from Emil Artin on his reciprocity law:

I will tell you a story about the Reciprocity Law. After my thesis, I had the idea to define L-series for non-abelian extensions. But for them to agree with the L-series for abelian extensions, a certain isomorphism had to be true. I could show it implied all the standard reciprocity laws. So I called it the General Reciprocity Law and tried to prove it but couldn’t, even after many tries. Then I showed it to the other number theorists, but they all laughed at it, and I remember Hasse in particular telling me it couldn’t possibly be true.

Still, I kept at it, but nothing I tried worked. Not a week went by — for three years! — that I did not try to prove the Reciprocity Law. It was discouraging, and meanwhile I turned to other things. Then one afternoon I had nothing special to do, so I said, ‘Well, I try to prove the Reciprocity Law again.’ So I went out and sat down in the garden. You see, from the very beginning I had the idea to use the cyclotomic fields, but they never worked, and now I suddenly saw that all this time I had been using them in the wrong way — and in half an hour I had it.

56.6  A few harder problems to think about

Problem 56A (Kronecker-Weber theorem). Let L be an abelian extension of ℚ. Then L is contained in a cyclotomic extension ℚ(ζ), where ζ is an mth root of unity (for some m).

Problem 56B (Hilbert class field). Let K be any number field. Then there exists a unique abelian extension E∕K which is unramified at all primes (finite or infinite) and such that Gal(E∕K) ≅ CK(1), the class group of K.

We call E the Hilbert class field of K.

Part XV
Algebraic Topology I: Homotopy

57  Some topological constructions

In this short chapter we briefly describe some common spaces and constructions in topology that we haven’t yet discussed.

57.1  Spheres

Recall that

S^n = {(x0, …, xn) | x0² + ⋯ + xn² = 1} ⊂ ℝ^{n+1}

is the surface of an n-sphere, while

D^{n+1} = {(x0, …, xn) | x0² + ⋯ + xn² ≤ 1} ⊂ ℝ^{n+1}

is the corresponding closed ball. (So, for example, D² is a disk in the plane, while S¹ is the unit circle.)

Exercise 57.1.1. Show that the open ball D^n ∖ S^{n−1} is homeomorphic to ℝ^n.

In particular, S⁰ consists of two points, while D¹ can be thought of as the interval [−1, 1].

57.2  Quotient topology

Prototypical example for this section: D^n∕S^{n−1} = S^n, or the torus.

Given a space X, we can identify some of the points together by any equivalence relation ∼; for x ∈ X we denote its equivalence class by [x]. Geometrically, this is the space achieved by welding together points equivalent under ∼.

Formally,

Definition 57.2.1. Let X be a topological space, and ∼ an equivalence relation on the points of X. Then X∕∼ is the space whose

(a) points are the equivalence classes [x] for x ∈ X, and
(b) open sets are those sets of equivalence classes whose preimage under the projection X → X∕∼ is open in X.

As far as I can tell, this definition is mostly useless for intuition, so here are some examples.

Example 57.2.2 (Interval modulo endpoints)
Suppose we take D¹ = [−1, 1] and quotient by the equivalence relation which identifies the endpoints −1 and 1. (Formally, x ∼ y ⟺ (x = y) or {x, y} = {−1, 1}.) In that case, we simply recover S¹:

Observe that a small open neighborhood around the welded point 1 ∼ −1 in the quotient space corresponds to two half-intervals at −1 and 1 in the original space D¹. This should convince you the definition we gave is the right one.

Example 57.2.3 (More quotient spaces)
Convince yourself that:

One special case that we did above:

Definition 57.2.4. Let A ⊆ X. Consider the equivalence relation which identifies all the points of A with each other, while leaving all remaining points inequivalent. (In other words, x ∼ y if x = y or x, y ∈ A.) Then the resulting quotient space is denoted X∕A.

So in this notation,

D^n∕S^{n−1} = S^n.

Abuse of Notation 57.2.5. Note that I'm deliberately being sloppy, and saying "D^n∕S^{n−1} = S^n" or "D^n∕S^{n−1} is S^n", when I really ought to say "D^n∕S^{n−1} is homeomorphic to S^n". This is a general theme in mathematics: objects which are homeomorphic/isomorphic/etc. are generally not carefully distinguished from each other.

57.3  Product topology

Prototypical example for this section: ℝ × ℝ is ℝ², and S¹ × S¹ is the torus.

Definition 57.3.1. Given topological spaces X and Y, the product topology on X × Y is the space whose

(a) points are pairs (x, y) with x ∈ X and y ∈ Y, and
(b) topology is generated by the basis {U × V | U ⊆ X open, V ⊆ Y open}.

Remark 57.3.2 — It is not hard to show that, in fact, one need only consider basis elements for U and V . That is to say,

{U × V | U,V basis elements for X, Y}

is also a basis for X × Y .

We really do need to fiddle with the basis: in ℝ × ℝ, an open unit disk had better be open, despite not being of the form U × V.

This does exactly what you think it would.

Example 57.3.3 (The unit square)
Let X = [0,1] and consider X × X. We of course expect this to be the unit square. Pictured below is an open set of X × X in the basis.

Exercise 57.3.4. Convince yourself this basis gives the same topology as the product metric on X × X. So this is the “right” definition.

Example 57.3.5 (More product spaces)

(a)
ℝ × ℝ is the Euclidean plane.
(b)
S1 × [0,1] is a cylinder.
(c)
S1 × S1 is a torus! (Why?)

57.4  Disjoint union and wedge sum

Prototypical example for this section: S¹ ∨ S¹ is the figure eight.

The disjoint union of two spaces is geometrically exactly what it sounds like: you just imagine the two spaces side by side. For completeness, here is the formal definition.

Definition 57.4.1. Let X and Y be two topological spaces. The disjoint union, denoted X ∐ Y, is defined by

(a) its points are the disjoint union of the points of X and Y, and
(b) a set is open if and only if its intersection with X is open in X and its intersection with Y is open in Y.
Exercise 57.4.2. Show that the disjoint union of two nonempty spaces is disconnected.

More interesting is the wedge sum, where two topological spaces X and Y are fused together only at a single base point.

Definition 57.4.3. Let X and Y be topological spaces, and x0 ∈ X and y0 ∈ Y be points. We define the equivalence relation ∼ by declaring x0 ∼ y0 only. Then the wedge sum of two spaces is defined as

X ∨ Y =  (X  ∐ Y)∕∼.

Example 57.4.4 (S¹ ∨ S¹ is a figure eight)
Let X = S¹ and Y = S¹, and let x0 ∈ X and y0 ∈ Y be any points. Then X ∨ Y is a "figure eight": it is two circles fused together at one point.

Abuse of Notation 57.4.5. We often don't mention x0 and y0 when they are understood (or irrelevant). For example, from now on we will just write S¹ ∨ S¹ for a figure eight.

Remark 57.4.6 — Annoyingly, in LaTeX \wedge gives ∧ instead of ∨ (which is \vee). So this really should be called the "vee product", but too late.

57.5  CW complexes

Using this construction, we can start building some spaces. One common way to do so is using a so-called CW complex. Intuitively, a CW complex is built as follows:

(a) Start with a discrete set X⁰ of points, the 0-cells.
(b) For each k = 1, 2, …, attach some k-cells Dᵏ to Xᵏ⁻¹, by welding the boundary Sᵏ⁻¹ of each cell onto the existing space; this gives Xᵏ.

The resulting space X is the CW-complex. The set Xᵏ is called the k-skeleton of X. Each Dᵏ is called a k-cell; it is customary to denote it by e_α^k where α is some index. We say that X is finite if only finitely many cells were used.

Abuse of Notation 57.5.1. Technically, most sources (like [?]) allow one to construct infinite-dimensional CW complexes. We will not encounter any such spaces in the Napkin.

Example 57.5.2 (D2 with 2 + 2 + 1 and 1 + 1 + 1 cells)

(a)
First, we start with X0 having two points ea0 and eb0. Then, we join them with two 1-cells D1 (green), call them ec1 and ed1. The endpoints of each 1-cell (the copy of S0) get identified with distinct points of X0; hence X1∼=S1. Finally, we take a single 2-cell e2 (yellow) and weld it in, with its boundary fitting into the copy of S1 that we just drew. This gives the figure on the left.
(b)
In fact, one can do this using just 1 + 1 + 1 = 3 cells. Start with X0 having a single point e0. Then, use a single 1-cell e1, fusing its two endpoints into the single point of X0. Then, one can fit in a copy of S1 as before, giving D2 as on the right.

Example 57.5.3 (Sn as a CW complex)

(a)
One can obtain Sn (for n 1) with just two cells. Namely, take a single point e0 for X0, and to obtain Sn take Dn and weld its entire boundary into e0.

We already saw this example in the beginning with n = 2, when we saw that the sphere S2 was the result when we fuse the boundary of a disk D2 together.

(b)
Alternatively, one can do a “hemisphere” construction, by constructing Sn inductively using two cells in each dimension. So S0 consists of two points, then S1 is obtained by joining these two points by two segments (1-cells), and S2 is obtained by gluing two hemispheres (each a 2-cell) with S1 as its equator.

Definition 57.5.4. Formally, for each k-cell e_α^k we want to add to Xᵏ, we take its boundary S_α^{k−1} and weld it onto Xᵏ⁻¹ via an attaching map S_α^{k−1} → Xᵏ⁻¹. Then

Xᵏ = ( Xᵏ⁻¹ ∐ (∐_α e_α^k) ) ∕ ∼

where ∼ identifies each boundary point of e_α^k with its image in Xᵏ⁻¹.

57.6  The torus, Klein bottle, ℝℙn, ℂℙn

We now present four of the most important examples of CW complexes.

57.6.i  The torus

The torus can be formed by taking a square and identifying the opposite edges in the same direction: if you walk off the right edge, you re-appear at the corresponding point on the left edge. (Think Asteroids from Atari!)

Thus the torus is (ℝ/ℤ)2 ≅ S1 × S1.

Note that all four corners get identified together to a single point. One can realize the torus in 3-space by treating the square as a sheet of paper, taping together the left and right (red) edges to form a cylinder, then bending the cylinder and fusing the top and bottom (blue) edges to form the torus.

[Figure: folding the square into a cylinder, then into a torus. Image from [?]]

The torus can be realized as a CW complex with one 0-cell, two 1-cells a and b, and one 2-cell whose boundary is welded in along the path aba⁻¹b⁻¹.

We say that aba⁻¹b⁻¹ is the attaching word; this shorthand will be convenient later on.

57.6.ii  The Klein bottle

The Klein bottle is defined similarly to the torus, except one pair of edges is identified in the opposite manner, as shown.

Unlike the torus, one cannot realize this in 3-space without self-intersection. One can tape together the red edges as before to get a cylinder, but to then fuse the resulting blue circles in opposite directions is not possible in 3D. Nevertheless, we often draw a picture in 3-dimensional space in which we tacitly allow the cylinder to intersect itself.

[Figures: the Klein bottle, drawn self-intersecting in 3-space. Images from [?, ?]]

Like the torus, the Klein bottle is realized as a CW complex with one 0-cell, two 1-cells a and b, and one 2-cell with attaching word abab⁻¹.

57.6.iii  Real projective space

Let’s start with n = 2. The space ℝℙ2 is obtained if we reverse both directions of the square from before, as shown.

However, once we do this the fact that the original polygon is a square is kind of irrelevant; we can combine a red and blue edge to get the single purple edge. Equivalently, one can think of this as a circle with half its circumference identified with the other half:

The resulting space should be familiar to those of you who do projective (Euclidean) geometry. Indeed, there are several possible geometric interpretations; for instance, one can think of ℝℙ2 as the set of lines through the origin in ℝ3, or as the Euclidean plane ℝ2 augmented with one point at infinity for each class of parallel lines.

Exercise 57.6.1. Observe that these formulations are equivalent by considering the plane z = 1 in ℝ3, and intersecting each line in the first formulation with this plane.

We can also express ℝℙ2 using coordinates: it is the set of triples (x : y : z) of real numbers not all zero up to scaling, meaning that

(x : y : z) = (λx : λy : λz)

for any λ ≠ 0. Using the “lines through the origin in ℝ3” interpretation makes it clear why this coordinate system gives the right space. The points at infinity are those with z = 0, and any point with z ≠ 0 gives a Cartesian point since

(x : y : z) = (x/z : y/z : 1)

hence we can think of it as the Cartesian point (x/z, y/z).
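
For instance, here is the arithmetic in a small example (the numbers are chosen arbitrarily): (2 : 4 : 2) = (1 : 2 : 1), taking λ = 1/2, and this point corresponds to the Cartesian point (2/2, 4/2) = (1, 2). On the other hand, (1 : 2 : 0) is a point at infinity.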

In this way we can actually define real projective n-space, ℝℙn, for any n as either

(i)
The set of lines through the origin in ℝn+1,
(ii)
Using n + 1 coordinates as above, or
(iii)
As ℝn augmented with points at infinity, which themselves form a copy of ℝℙn−1.

As a possibly helpful example, we give all three pictures of ℝℙ1.

Example 57.6.2 (Real projective 1-Space)
ℝℙ1 can be thought of as S1 modulo the relation that antipodal points are identified. Projecting onto a tangent line, we see that we get a copy of ℝ plus a single point at infinity, corresponding to the parallel line (drawn in cyan below).

Thus, the points of ℝℙ1 have two forms: the points (x : 1), one for each real number x, and the single point at infinity (1 : 0).

So, we can literally write

ℝℙ1 = ℝ ∪ {∞}.

Note that ℝℙ1 is also the boundary of ℝℙ2. In fact, note also that topologically we have

ℝℙ1 ≅ S1

since it is the “real line with endpoints fused together”.

Since ℝℙn is just “ℝn (or Dn) with ℝℙn−1 as its boundary”, we can construct ℝℙn as a CW complex inductively. Note that ℝℙn thus consists of one cell in each dimension.

Example 57.6.3 (ℝℙn as a cell complex)

(a)
ℝℙ0 is a single point.
(b)
ℝℙ1 ≅ S1 is a circle, which as a CW complex is a 0-cell plus a 1-cell.
(c)
ℝℙ2 can be formed by taking a 2-cell and wrapping its perimeter twice around a copy of ℝℙ1.

57.6.iv  Complex projective space

The complex projective space ℂℙn is defined like ℝℙn with coordinates, i.e. tuples

(z0 : z1 : ⋯ : zn)

under scaling; this time the zi are complex. As before, ℂℙn can be thought of as ℂn augmented with some points at infinity (corresponding to ℂℙn−1).

Example 57.6.4 (Complex projective space)

(a)
ℂℙ0 is a single point.
(b)
ℂℙ1 is ℂ plus a single point at infinity (“complex infinity” if you will). That means as before we can think of ℂℙ1 as

ℂℙ1 = ℂ ∪ {∞}.

So, imagine taking the complex plane and then adding a single point to encompass the entire boundary. The result is just the sphere S2.

Here is a picture of ℂℙ1 with its coordinate system, the Riemann sphere.

[Figure: the Riemann sphere.]

Remark 57.6.5 (For Euclidean geometers) You may recognize that while ℝℙ2 is the setting for projective geometry, inversion about a circle is done in ℂℙ1 instead. When one does an inversion sending generalized circles to generalized circles, there is only one point at infinity: this is why we work in ℂℙ1.

Like ℝℙn, ℂℙn is a CW complex, built inductively by taking ℂn and welding its boundary onto ℂℙn−1. The difference is that as topological spaces,

ℂn ≅ ℝ2n ≅ D2n.

Thus, we attach the cells D0, D2, D4 and so on inductively to construct ℂℙn. Hence we see that

ℂℙn consists of one cell in each even dimension.

57.7  A few harder problems to think about

Problem 57A. Show that a space X is Hausdorff if and only if the diagonal {(x, x) | x ∈ X} is closed in the product space X × X.

Problem 57B. Realize the following spaces as CW complexes:

(a)
Möbius strip.
(b)
ℝ.
(c)
ℝn.

Problem 57C. Show that a finite CW complex is compact.

58  Fundamental groups

Topologists can’t tell the difference between a coffee cup and a doughnut. So how do you tell anything apart?

This is a very hard question to answer, but one way we can try to answer it is to find some invariants of the space. To draw on the group analogy, two groups are clearly not isomorphic if, say, they have different orders, or if one is simple and the other isn’t, etc. We’d like to find some similar properties for topological spaces so that we can actually tell them apart.

Two such invariants for a space X are the homotopy groups πn(X) and the homology groups Hn(X).

Homology groups are hard to define, but in general easier to compute. Homotopy groups are easier to define but harder to compute.

This chapter is about the fundamental group π1.

58.1  Fusing paths together

Recall that a path in a space X is a function [0,1] → X. Suppose we have paths γ1 and γ2 such that γ1(1) = γ2(0). We’d like to fuse them together to get a path γ1 ∗ γ2. Easy, right?

We unfortunately do have to hack the definition a tiny bit. In an ideal world, we’d have a path γ1 : [0,1] → X and γ2 : [1,2] → X and we could just merge them together to get γ1 ∗ γ2 : [0,2] → X. But the “2” is wrong here. The solution is that we allocate [0, 1/2] for the first path and [1/2, 1] for the second path; we run “twice as fast”.

Definition 58.1.1. Given two paths γ1, γ2 : [0,1] → X such that γ1(1) = γ2(0), we define a path γ1 ∗ γ2 : [0,1] → X by

(γ1 ∗ γ2)(t) = { γ1(2t)        0 ≤ t ≤ 1/2
              { γ2(2t − 1)    1/2 ≤ t ≤ 1.
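
As a quick sanity check on this definition: at t = 1/2 the two branches give γ1(1) and γ2(0), which agree by hypothesis, so γ1 ∗ γ2 really is continuous; and

(γ1 ∗ γ2)(0) = γ1(0),   (γ1 ∗ γ2)(1) = γ2(1),

so it is a path from the start of γ1 to the end of γ2, as promised.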

This hack unfortunately reveals a second shortcoming: this “product” is not associative. If we take (γ1 ∗ γ2) ∗ γ3 for some suitable paths, then [0, 1/4], [1/4, 1/2] and [1/2, 1] are the times allocated for γ1, γ2, γ3.

Question 58.1.2. What are the times allocated for γ1 ∗ (γ2 ∗ γ3)?

But I hope you’ll agree that even though this operation isn’t associative, the reason it fails to be associative is kind of stupid. It’s just a matter of how fast we run in certain parts.

So as long as we’re fusing paths together, we probably don’t want to think of [0,1] itself too seriously. And so we only consider everything up to (path) homotopy equivalence. (Recall that two paths α and β are homotopic if there’s a path homotopy F : [0,1]2 → X between them, which is a continuous deformation from α to β.) It is definitely true that

(γ1 ∗ γ2) ∗ γ3 ≃ γ1 ∗ (γ2 ∗ γ3).

It is also true that if α1 ≃ α2 and β1 ≃ β2 then α1 ∗ β1 ≃ α2 ∗ β2.

Naturally, homotopy is an equivalence relation, so each path γ lives in some “homotopy class”, its equivalence class under ≃. We’ll denote this [γ]. Then it makes sense to talk about [α] ∗ [β]. Thus, we can think of ∗ as an operation on homotopy classes.

58.2  Fundamental groups

Prototypical example for this section: π1(ℝ2) is trivial and π1(S1) ≅ ℤ.

At this point I’m a little annoyed at keeping track of endpoints, so now I’m going to specialize to a certain type of path.

Definition 58.2.1. A loop is a path with γ(0) = γ(1).

Hence if we restrict our attention to paths starting at a single point x0, then we can stop caring about endpoints and start-points, since everything starts and stops at x0. We even have a very canonical loop: the “do-nothing” loop given by standing at x0 the whole time.

Definition 58.2.2. Denote the trivial “do-nothing loop” by 1. A loop γ is nulhomotopic if it is homotopic to 1; i.e. γ ≃ 1.

For homotopy of loops, you might visualize “reeling in” the loop, contracting it to a single point.

Example 58.2.3 (Loops in S2 are nulhomotopic)
As the following picture should convince you, every loop in the simply connected space S2 is nulhomotopic.

[Figure: a loop on S2 being contracted to a point.]

(Starting with the purple loop, we contract to the red-brown point.)

Hence to show that spaces are simply connected it suffices to understand the loops of that space. We are now ready to provide:

Definition 58.2.4. The fundamental group of X with basepoint x0, denoted π1(X,x0), is the set of homotopy classes

{[γ] | γ a loop at x0}

equipped with ∗ as a group operation.

It might come as a surprise that this has a group structure. For example, what is the inverse? Let’s define it now.

Definition 58.2.5. Given a path α : [0,1] → X we can define a path ᾱ by

ᾱ(t) = α(1 − t).

In effect, this “runs α backwards”. Note that ᾱ starts at the endpoint of α and ends at the starting point of α.

Exercise 58.2.6. Show that for any path α, the loop α ∗ ᾱ is homotopic to the “do-nothing” loop at α(0). (Draw a picture.)

Let’s check it.

Proof that this is a group structure. Clearly ∗ takes two loops at x0 and spits out a loop at x0. We also already took the time to show that ∗ is associative. So we only have to check that (i) there’s an identity, and (ii) there’s an inverse. For (i), the do-nothing loop 1 works: 1 ∗ γ ≃ γ ≃ γ ∗ 1, since again the only issue is the speed at which we run. For (ii), the inverse of [γ] is [γ̄], by the exercise above.

Hence π1(X,x0) is actually a group. □
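
If you want an explicit formula for the exercise above (one of many possible homotopies; this particular choice is mine), you can take H : [0,1] × [0,1] → X with

H(s, t) = α(2t(1 − s)) for 0 ≤ t ≤ 1/2,   H(s, t) = α(2(1 − t)(1 − s)) for 1/2 ≤ t ≤ 1.

Then H(0, −) = α ∗ ᾱ and H(1, −) is the constant loop at α(0); at time s, the loop walks along α only as far as α(1 − s) before turning back.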

Before going any further I had better give some examples.

Example 58.2.7 (Examples of fundamental groups)
Note that proving the following results is not at all trivial. For now, just try to see intuitively why the claimed answer “should” be correct.

(a)
The fundamental group of ℂ is the trivial group: in the plane, every loop is nulhomotopic. (Proof: imagine it’s a piece of rope and reel it in.)
(b)
On the other hand, the fundamental group of ℂ ∖ {0} (meteor example from earlier) with any base point is actually ℤ! We won’t be able to prove this for a while, but essentially a loop is determined by the number of times that it winds around the origin – these are so-called winding numbers. Think about it!
(c)
Similarly, we will soon show that the fundamental group of S1 (the boundary of the unit disk) is ℤ.

Officially, I also have to tell you what the base point is, but by symmetry in these examples, it doesn’t matter.

Here is the picture for ℂ ∖ {0}, with the hole exaggerated as the meteor from ?? .

Question 58.2.8. Convince yourself that the fundamental group of S1 is ℤ, and understand why we call these “winding numbers”. (This will be the most important example of a fundamental group in later chapters, so it’s crucial you figure it out now.)

Example 58.2.9 (The figure eight)
Consider a figure eight S1 S1, and let x0 be the center. Then

π1(S1 ∨ S1, x0) ≅ ⟨a, b⟩

is the free group generated on two letters. The idea is that one loop of the eight is a, and the other loop is b, so we expect π1 to be generated by the loops a and b (and their reversals ā and b̄). These loops don’t talk to each other.

Recall that in graph theory, we usually assume our graphs are connected, since otherwise we can just consider every connected component separately. Likewise, we generally want to restrict our attention to path-connected spaces, since if a space isn’t path-connected then it can be broken into a bunch of “path-connected components”. (Can you guess how to define this?) Indeed, you could imagine a space X that consists of the objects on my desk (but not the desk itself): π1 of my phone has nothing to do with π1 of my mug. They are just totally disconnected, both figuratively and literally.

But on the other hand we claim that in a path-connected space, the groups are very related!

Theorem 58.2.10 (Fundamental groups don’t depend on basepoint)
Let X be a path-connected space. Then for any x1 ∈ X and x2 ∈ X, we have

π1(X, x1) ≅ π1(X, x2).

Before you read the proof, see if you can guess the isomorphism based just on the picture below.

Proof. Let α be any path from x1 to x2 (possible by path-connectedness), and let ᾱ be its reverse. Then we can construct a map

π1(X, x1) → π1(X, x2) by [γ] ↦ [ᾱ ∗ γ ∗ α].

In other words, given a loop γ at x1, we can start at x2, follow ᾱ to x1, run γ, then run along α home to x2. Hence this is a map which builds a loop of π1(X, x2) from every loop at π1(X, x1). It is a homomorphism of the groups just because

(ᾱ ∗ γ1 ∗ α) ∗ (ᾱ ∗ γ2 ∗ α) ≃ ᾱ ∗ γ1 ∗ γ2 ∗ α

as α ∗ ᾱ is nulhomotopic.

Similarly, there is a homomorphism

π1(X, x2) → π1(X, x1) by [γ] ↦ [α ∗ γ ∗ ᾱ].

As these maps are mutual inverses, it follows they must be isomorphisms. End of story. □
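To spell out the last step, composing the two maps sends [γ] ∈ π1(X, x1) to

[α ∗ (ᾱ ∗ γ ∗ α) ∗ ᾱ] = [(α ∗ ᾱ) ∗ γ ∗ (α ∗ ᾱ)] = [γ],

since α ∗ ᾱ is nulhomotopic; the composition in the other order collapses in the same way.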

This is a bigger reason why we usually only care about path-connected spaces.

Abuse of Notation 58.2.11. For a path-connected space X we will often abbreviate π1(X,x0) to just π1(X), since it doesn’t matter which x0 X we pick.

Finally, recall that we originally defined “simply connected” as saying that any two paths with matching endpoints were homotopic. It’s possible to weaken this condition and then rephrase it using fundamental groups.

Exercise 58.2.12. Let X be a path-connected space. Prove that X is simply connected if and only if π1(X) is the trivial group. (One direction is easy; the other is a little trickier.)

This is the “usual” definition of simply connected.

58.3  Fundamental groups are functorial

One quick shorthand I will introduce to clean up the discussion:

Definition 58.3.1. By f : (X, x0) → (Y, y0), we will mean that f : X → Y is a continuous function of spaces which also sends the point x0 to y0.

Let X and Y be topological spaces and f : (X, x0) → (Y, y0). We now want to relate the fundamental groups of X and Y.

Recall that a loop γ in (X, x0) is a map γ : [0,1] → X with γ(0) = γ(1) = x0. Then if we consider the composition

[0,1] −γ→ (X, x0) −f→ (Y, y0)

then we get straight-away a loop in Y at y0! Let’s call this loop f♯γ.

Lemma 58.3.2 (f♯ is homotopy invariant)
If γ1 ≃ γ2 are path-homotopic, then in fact

f♯γ1 ≃ f♯γ2.

Proof. Just take the homotopy h taking γ1 to γ2 and consider f ∘ h. □

It’s worth noting at this point that if X and Y are homeomorphic, then their fundamental groups are all isomorphic. Indeed, let f : X → Y and g : Y → X be mutually inverse continuous maps. Then one can check that f♯ : π1(X, x0) → π1(Y, y0) and g♯ : π1(Y, y0) → π1(X, x0) are inverse maps between the groups (assuming f(x0) = y0 and g(y0) = x0).

58.4  Higher homotopy groups

Why the notation π1 for the fundamental group? And what are π2, …? The answer lies in the following rephrasing:

Question 58.4.1. Convince yourself that a loop is the same thing as a continuous function S1 → X.

It turns out we can define homotopy for things other than paths. Two functions f, g : Y → X are homotopic if there exists a continuous function Y × [0,1] → X which continuously deforms f to g. So everything we did above was just the special case Y = S1.

For general n, the group πn(X) is defined as the homotopy classes of the maps Sn → X. The group operation is a little harder to specify. You have to show that Sn is homeomorphic to [0,1]n with some endpoints fused together; for example S1 is [0,1] with 0 fused to 1. Once you have these cubes, you can merge them together on a face. (Again, I’m being terribly imprecise, deliberately.)
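
To make the merging slightly more concrete (a sketch of one standard convention): regard two classes as maps f, g : [0,1]n → X sending the boundary of the cube to the basepoint, and merge them along the first coordinate:

(f ∗ g)(t1, t2, …, tn) = f(2t1, t2, …, tn) if t1 ≤ 1/2,   g(2t1 − 1, t2, …, tn) if t1 ≥ 1/2.

This is well-defined since at t1 = 1/2 both halves land on the basepoint, and for n = 1 it is exactly our path concatenation.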

For n ≥ 2, πn behaves somewhat differently than π1. (You might not be surprised, as Sn is simply connected for all n ≥ 2 but not when n = 1.) In particular, it turns out that πn(X) is an abelian group for all n ≥ 2.

Let’s see some examples.

Example 58.4.2 (πn(Sn) ≅ ℤ)
As we saw, π1(S1) ≅ ℤ; given the base circle S1, we can wrap a second circle around it as many times as we want. In general, it’s true that πn(Sn) ≅ ℤ.

Example 58.4.3 (πn(Sm) ≅ {1} when n < m)
We saw that π1(S2) ≅ {1}, because a circle in S2 can just be reeled in to a point. It turns out that similarly, any smaller n-dimensional sphere can be reeled in on the surface of a bigger m-dimensional sphere. So in general, πn(Sm) is trivial for n < m.

However, beyond these observations, the groups behave quite weirdly. Here is a table of πn(Sm) for 1 ≤ m ≤ 8 and 2 ≤ n ≤ 10, so you can see what I’m talking about. (Taken from Wikipedia.)

πn(Sm) | n = 2   n = 3   n = 4   n = 5   n = 6    n = 7       n = 8      n = 9         n = 10
m = 1  | {1}     {1}     {1}     {1}     {1}      {1}         {1}        {1}           {1}
m = 2  | ℤ       ℤ       ℤ/2ℤ    ℤ/2ℤ    ℤ/12ℤ    ℤ/2ℤ        ℤ/2ℤ       ℤ/3ℤ          ℤ/15ℤ
m = 3  |         ℤ       ℤ/2ℤ    ℤ/2ℤ    ℤ/12ℤ    ℤ/2ℤ        ℤ/2ℤ       ℤ/3ℤ          ℤ/15ℤ
m = 4  |                 ℤ       ℤ/2ℤ    ℤ/2ℤ     ℤ × ℤ/12ℤ   (ℤ/2ℤ)2    ℤ/2ℤ × ℤ/2ℤ   ℤ/24ℤ × ℤ/3ℤ
m = 5  |                         ℤ       ℤ/2ℤ     ℤ/2ℤ        ℤ/24ℤ      ℤ/2ℤ          ℤ/2ℤ
m = 6  |                                 ℤ        ℤ/2ℤ        ℤ/2ℤ       ℤ/24ℤ         {1}
m = 7  |                                          ℤ           ℤ/2ℤ       ℤ/2ℤ          ℤ/24ℤ
m = 8  |                                                      ℤ          ℤ/2ℤ          ℤ/2ℤ

(The blank cells are the trivial groups πn(Sm) for n < m, as noted above.)

Actually, it turns out that if you can compute πn(Sm) for every m and n, then you can essentially compute any homotopy classes. Thus, computing πn(Sm) is sort of a lost cause in general, and the mixture of chaos and pattern in the above table is a testament to this.

58.5  Homotopy equivalent spaces

Prototypical example for this section: A disk is homotopy equivalent to a point, an annulus is homotopy equivalent to S1.

Up to now I’ve abused notation and referred to “path homotopy” as just “homotopy” for two paths. I will unfortunately continue to do so (and so any time I say two paths are homotopic, you should assume I mean “path-homotopic”). But let me tell you what the general definition of homotopy is first.

Definition 58.5.1. Let f, g : X → Y be continuous functions. A homotopy is a continuous function F : X × [0,1] → Y, which we’ll write Fs(x) for s ∈ [0,1], x ∈ X, such that

F0(x) = f(x) and F1(x) = g(x) for all x ∈ X.

If such a function exists, then f and g are homotopic.

Intuitively this is once again “deforming f to g”. You might notice this is almost exactly the same definition as path-homotopy, except that f and g are any functions instead of paths, and hence there’s no restriction on keeping some “endpoints” fixed through the deformation.

This homotopy can be quite dramatic:

Example 58.5.2
The zero function z ↦ 0 and the identity function z ↦ z are homotopic as functions ℂ → ℂ. The necessary deformation is

[0,1] × ℂ → ℂ by (t, z) ↦ tz.

I bring this up because I want to define:

Definition 58.5.3. Let X and Y be topological spaces. They are homotopy equivalent if there exist continuous functions f : X → Y and g : Y → X such that

(i)
g ∘ f : X → X is homotopic to the identity map on X, and
(ii)
f ∘ g : Y → Y is homotopic to the identity map on Y.

If a topological space is homotopy equivalent to a point, then it is said to be contractible.

Question 58.5.4. Why are two homeomorphic spaces also homotopy equivalent?

Intuitively, you can think of this as a more generous form of stretching and bending than homeomorphism: we are allowed to compress huge spaces into single points.

Example 58.5.5 (ℂ is contractible)
Consider the topological space ℂ and the space consisting of the single point {0}. We claim these spaces are homotopy equivalent (can you guess what f and g are?) Indeed, the two things to check are

(i)
ℂ → {0} ↪ ℂ by z ↦ 0 ↦ 0 is homotopic to the identity on ℂ, which we just saw, and
(ii)
{0} ↪ ℂ → {0} by 0 ↦ 0 ↦ 0, which is the identity on {0}.

Here by ↪ I just mean → in the special case that the function is just an “inclusion”.

Remark 58.5.6 — ℂ cannot be homeomorphic to a point because there is no bijection of sets between them.

Example 58.5.7 (ℂ ∖ {0} is homotopy equivalent to S1)
Consider the topological spaces ℂ ∖ {0}, the punctured plane, and the circle S1 viewed as a subset of ℂ. We claim these spaces are actually homotopy equivalent! The necessary functions are the inclusion

S1 ↪ ℂ ∖ {0}

and the function

ℂ ∖ {0} → S1 by z ↦ z/|z|.

You can check that these satisfy the required condition.
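
For example, here is one way the check can go (a sketch). The composite S1 ↪ ℂ ∖ {0} → S1 sends z ↦ z/|z| = z, since |z| = 1 on S1, so it is literally the identity. The other composite ℂ ∖ {0} → ℂ ∖ {0} sends z ↦ z/|z|, and the straight-line homotopy

F(t, z) = (1 − t)z + t · z/|z| = ((1 − t) + t/|z|) · z

deforms the identity to it; since F(t, z) is always a positive real multiple of z, it never hits 0, so the whole homotopy stays inside ℂ ∖ {0}.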

Remark 58.5.8 — On the other hand, ℂ ∖ {0} cannot be homeomorphic to S1. One can make S1 disconnected by deleting two points; the same is not true for ℂ ∖ {0}.

Example 58.5.9 (Disk = Point, Annulus = Circle.)
By the same token, a disk is homotopy equivalent to a point; an annulus is homotopy equivalent to a circle. (This might be a little easier to visualize, since it’s finite.)

I bring these up because it turns out that

Algebraic topology can’t distinguish between homotopy equivalent spaces.

More precisely,

Theorem 58.5.10 (Homotopy equivalent spaces have isomorphic fundamental groups)
Let X and Y be path-connected, homotopy equivalent spaces. Then πn(X) ≅ πn(Y) for every positive integer n.

Proof. Let γ : [0,1] → X be a loop. Let f : X → Y and g : Y → X be maps witnessing that X and Y are homotopy equivalent (meaning f ∘ g and g ∘ f are each homotopic to the identity). Then the composition

[0,1] −γ→ X −f→ Y

is a loop in Y and hence f induces a natural homomorphism π1(X) → π1(Y). Similarly g induces a natural homomorphism π1(Y) → π1(X). The conditions on f and g now say exactly that these two homomorphisms are inverse to each other, meaning the maps are isomorphisms. □

In particular,

Question 58.5.11. What are the fundamental groups of contractible spaces?

That means, for example, that algebraic topology can’t tell the following homotopy equivalent subspaces of ℝ2 apart.

58.6  The pointed homotopy category

This section is meant to be read by those who know some basic category theory. Those of you that don’t should come back after reading ?? . Those of you that do will enjoy how succinctly we can summarize the content of this chapter using categorical notions.

Definition 58.6.1. The pointed homotopy category hTop∗ is defined as follows: the objects are pairs (X, x0) of topological spaces with a distinguished basepoint, and the arrows are homotopy classes of continuous functions (X, x0) → (Y, y0).

In particular, two path-connected spaces are isomorphic in this category exactly when they are homotopy equivalent. Then we can summarize many of the preceding results as follows:

Theorem 58.6.2 (Functorial interpretation of fundamental groups)
There is a functor

π1 : hTop∗ → Grp

sending

(X, x0)  ↦  π1(X, x0)
   |f            |f♯
   ↓             ↓
(Y, y0)  ↦  π1(Y, y0).

This implies several things; for example, identity maps induce identity homomorphisms, (g ∘ f)♯ = g♯ ∘ f♯ for composable f and g, and isomorphic objects of hTop∗ (that is, homotopy equivalent pointed spaces) have isomorphic fundamental groups.

Remark 58.6.3 — In fact, π1(X, x0) is the set of arrows (S1, 1) → (X, x0) in hTop∗, so this is actually a covariant Yoneda functor (?? ), except with target Grp instead of Set.

58.7  A few harder problems to think about

Problem 58A (Harmonic fan). Exhibit a subspace X of the metric space ℝ2 which is path-connected but for which a point p can be found such that any r-neighborhood of p with r < 1 is not path-connected.

Problem 58B (Special case of Seifert-van Kampen). Let X be a topological space. Suppose U and V are connected open subsets of X, with X = U ∪ V, so that U ∩ V is nonempty and path-connected.

Prove that if π1(U) = π1(V) = {1} then π1(X) = {1}.

Remark 58.7.1 — The Seifert–van Kampen theorem generalizes this for π1(U) and π1(V) any groups; it gives a formula for calculating π1(X) in terms of π1(U), π1(V), π1(U ∩ V). The proof is much the same.

Unfortunately, this does not give us a way to calculate π1(S1), because it is not possible to write S1 = U ∪ V for U ∩ V connected.

Problem 58C (RMM 2013). Let n ≥ 2 be a positive integer. A stone is placed at each vertex of a regular 2n-gon. A move consists of selecting an edge of the 2n-gon and swapping the two stones at the endpoints of the edge. Prove that if a sequence of moves swaps every pair of stones exactly once, then there is some edge never used in any move.

(This last problem doesn’t technically have anything to do with the chapter, but the “gut feeling” which motivates the solution is very similar.)

59  Covering projections

A few chapters ago we talked about what a fundamental group was, but we didn’t actually show how to compute any of them except for the most trivial case of a simply connected space. In this chapter we’ll introduce the notion of a covering projection, which will let us see how some of these groups can be found.

59.1  Even coverings and covering projections

Prototypical example for this section: ℝ covers S1.

What we want now is a notion where a big space E, a “covering space”, can be projected down onto a base space B in a nice way. Here is the notion of “nice”:

Definition 59.1.1. Let p : E → B be a continuous function. Let U be an open set of B. We call U evenly covered (by p) if p^pre(U) is a disjoint union of open sets (possibly infinite) such that p restricted to any of these sets is a homeomorphism.

Picture:

[Figure: the pre-image of an evenly covered U, a stack of “pancakes”. Image from [?]]

All we’re saying is that U is evenly covered if its pre-image is a bunch of copies of it. (Actually, a little more: each of the pancakes is homeomorphic to U, but we also require that p is the homeomorphism.)

Definition 59.1.2. A covering projection p : E → B is a surjective continuous map such that every base point b ∈ B has an open neighborhood U ∋ b which is evenly covered by p.

Exercise 59.1.3 (On requiring surjectivity of p). Let p : E → B satisfy this definition, except that p need not be surjective. Show that the image of p is a connected component of B. Thus if B is connected and E is nonempty, then p : E → B is already surjective. For this reason, some authors omit the surjectivity hypothesis, as usually B is path-connected.

Here is the most stupid example of a covering projection.

Example 59.1.4 (Tautological covering projection)
Let’s take n disconnected copies of any space B: formally, E = B × {1, …, n} with the discrete topology on {1, …, n}. Then there exists a tautological covering projection E → B by (x, m) ↦ x; we just project all n copies.

This is a covering projection because every open set in B is evenly covered.

This is not really that interesting because B × [n] is not path-connected.

A much more interesting example is that of ℝ and S1.

Example 59.1.5 (Covering projection of S1)
Take p : ℝ → S1 by θ ↦ e2πiθ. This is essentially wrapping the real line into a single helix and projecting it down.

[Figure: the helix ℝ sitting over the circle S1.]

We claim this is a covering projection. Indeed, consider the point 1 ∈ S1 (where we view S1 as the unit circle in the complex plane). We can draw a small open neighborhood of it whose pre-image is a bunch of copies in ℝ.

Note that not all open neighborhoods work this time: notably, U = S1 does not work because the pre-image would be the entire ℝ.

Example 59.1.6 (Covering of S1 by itself)
The map S1 → S1 by z ↦ z3 is also a covering projection. Can you see why?

Example 59.1.7 (Covering projections of ℂ ∖ {0})
For those comfortable with complex arithmetic,

(a)
The exponential map exp : ℂ → ℂ ∖ {0} is a covering projection.
(b)
For each n, the nth power map z ↦ zn : ℂ ∖ {0} → ℂ ∖ {0} is a covering projection.

59.2  Lifting theorem

Prototypical example for this section: ℝ covers S1.

Now here’s the key idea: we are going to try to interpret loops in B as paths in E. This is often much simpler. For example, we had no idea how to compute the fundamental group of S1, but the fundamental group of ℝ is just the trivial group. So if we can interpret loops in S1 as paths in ℝ, that might (and indeed it does!) make computing π1(S1) tractable.

Definition 59.2.1. Let γ : [0,1] → B be a path and p : E → B a covering projection. A lifting of γ is a path γ̃ : [0,1] → E such that p ∘ γ̃ = γ.

Picture:

              E
        γ̃ ↗  |
             | p
             ↓
[0,1] −γ→    B

Example 59.2.2 (Typical example of lifting)
Take p : ℝ → S1 by θ ↦ e2πiθ (so S1 is considered again as the unit circle). Consider the path γ in S1 which starts at 1 and wraps around S1 once, counterclockwise, ending at 1 again. In symbols, γ : [0,1] → S1 by t ↦ e2πit.

Then one lifting γ̃ is the path which walks from 0 to 1. In fact, for any integer n, walking from n to n + 1 works.

Similarly, the counterclockwise path from 1 ∈ S1 to −1 ∈ S1 has a lifting: for some integer n, the path from n to n + 1/2.

The above is the primary example of a lifting. It seems like we have the following structure: given a path γ in B starting at b0, we start at any point in the fiber p^pre(b0). (In our prototypical example, B = S1, b0 = 1 and that’s why we start at any integer n.) After that we just trace along the path in B, and we get a corresponding path in E.

Question 59.2.3. Take a path γ in S1 with γ(0) = 1. Convince yourself that once we select an integer n, then there is exactly one lifting starting at n.

It turns out this is true more generally.

Theorem 59.2.4 (Lifting paths)
Suppose γ : [0,1] → B is a path with γ(0) = b0, and p : (E, e0) → (B, b0) is a covering projection. Then there exists a unique lifting γ̃ : [0,1] → E such that γ̃(0) = e0.

Proof. For every point b ∈ B, consider an evenly covered open neighborhood Ub in B. Then the family of open sets

{ γ^pre(Ub) | b ∈ B }

is an open cover of [0,1]. As [0,1] is compact we can take a finite subcover. Thus we can chop [0,1] into finitely many closed intervals [0,1] = I1 ∪ I2 ∪ ⋯ ∪ IN in that order, such that for every Ik, the image γ^img(Ik) is contained in some Ub.

We’ll construct γ̃ interval by interval now, starting at I1. Initially, place a robot at e0 ∈ E and a mouse at b0 ∈ B. For each interval Ik, the mouse moves around according to however γ behaves on Ik. But the whole time it’s in some evenly covered Uk; the fact that p is a covering projection tells us that there are several copies of Uk living in E. Exactly one of them, say Vk, contains our robot. So the robot just mimics the mouse until it gets to the end of Ik. Then the mouse is in some new evenly covered Uk+1, and we can repeat. □
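
As a concrete instance (an example of my own choosing, in the spirit of the earlier ones): take p : ℝ → S1 as usual and let γ(t) = e4πit, the loop winding around S1 twice. Starting at e0 = 0, the robot has no choice at any step, and the unique lift is

γ̃(t) = 2t,

ending at the point 2 of the fiber p^pre(1) = ℤ. In the same way, a loop winding m times lifts to a path from 0 to m.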

The theorem can be generalized to a diagram

                 (E, e0)
             f̃ ↗    |
                    | p
                    ↓
(Y, y0)  −f→    (B, b0)

where Y is some general path-connected space, as follows.

Theorem 59.2.5 (General lifting criterion)
Let f : (Y, y0) → (B, b0) be continuous and consider a covering projection p : (E, e0) → (B, b0). (As usual, Y, B, E are path-connected.) Then a lifting f̃ with f̃(y0) = e0 exists if and only if

f∗^img(π1(Y, y0)) ⊆ p∗^img(π1(E, e0)),

i.e. the image of π1(Y, y0) under f∗ is contained in the image of π1(E, e0) under p∗ (both viewed as subgroups of π1(B, b0)). If this lifting exists, it is unique.

As p∗ is injective, we actually have p∗^img(π1(E, e0)) ≅ π1(E, e0). But in this case we are interested in the actual elements, not just the isomorphism classes of the groups.

Question 59.2.6. What happens if we put Y = [0,1]?

Remark 59.2.7 (Lifting homotopies) Here’s another cool special case: Recall that a homotopy can be encoded as a continuous function [0,1] × [0,1] → X. But [0,1] × [0,1] is also simply connected. Hence given a homotopy γ1 ≃ γ2 in the base space B, we can lift it to get a homotopy γ̃1 ≃ γ̃2 in E.

Another nice application of this result is ?? .

59.3  Lifting correspondence

Prototypical example for this section: (ℝ, 0) covers (S1, 1).

Let’s return to the task of computing fundamental groups. Consider a covering projection p : (E, e0) → (B, b0).

A loop γ can be lifted uniquely to γ̃ in E which starts at e0 and ends at some point e in the fiber p^pre(b0). You can easily check that this e ∈ E does not change if we pick a different path γ′ homotopic to γ.

Question 59.3.1. Look at the picture in ?? .

Put one finger at 1 ∈ S1, and one finger on 0 ∈ ℝ. Trace a loop homotopic to γ in S1 (meaning, you can go backwards and forwards but you must end with exactly one full counterclockwise rotation) and follow along with the other finger in ℝ.

Convince yourself that you have to end at the point 1 ∈ ℝ.

Thus every homotopy class of a loop at b0 (i.e. an element of π1(B,b0)) can be associated with some e in the fiber of b0. The below proposition summarizes this and more.

Proposition 59.3.2
Let p : (E, e0) → (B, b0) be a covering projection. Then we have a function of sets

Φ : π1(B, b0) → p^pre(b0)

by [γ] ↦ γ̃(1), where γ̃ is the unique lifting starting at e0. Furthermore,

(a)
if E is path-connected, then Φ is surjective, and
(b)
if E is simply connected, then Φ is injective.

Question 59.3.3. Prove that E path-connected implies Φ is surjective. (This is really offensively easy.)

Proof. To prove the proposition, we’ve done everything except show that E simply connected implies Φ injective. To do this suppose that γ1 and γ2 are loops such that Φ([γ1]) = Φ([γ2]).

Applying lifting, we get paths γ̃1 and γ̃2 both starting at the point e0 ∈ E and ending at the same point e1 ∈ E. Since E is simply connected that means they are homotopic, and we can write a homotopy F : [0,1] × [0,1] → E which unites them. But then consider the composition of maps

[0,1] × [0,1] −F→ E −p→ B.

You can check this is a homotopy from γ1 to γ2. Hence [γ1] = [γ2], done. □

This motivates:

Definition 59.3.4. A universal cover of a space B is a covering projection p : E → B where E is simply connected (and in particular path-connected).

Abuse of Notation 59.3.5. When p is understood, we sometimes just say E is the universal cover.

Example 59.3.6 (Fundamental group of S1)
Let’s return to our standard p : ℝ → S1. Since ℝ is simply connected, this is a universal cover of S1. And indeed, the fiber of any point in S1 is a copy of the integers: naturally in bijection with loops in S1.

You can show (and it’s intuitively obvious) that the bijection

Φ : π1(S1) ↔ ℤ

is in fact a group homomorphism if we equip ℤ with its additive group structure. Since it’s a bijection, this leads us to conclude π1(S1) ≅ ℤ.
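
Here is the one-line computation behind the homomorphism claim (a sketch): if γm and γn are loops winding m and n times respectively, the lift of γm starting at 0 ends at m, and the lift of γn starting at m is just its lift from 0 shifted up by m, ending at m + n. Hence

Φ([γm] ∗ [γn]) = m + n = Φ([γm]) + Φ([γn]).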

59.4  Regular coverings

Prototypical example for this section: ℝ → S1 comes from the ℤ-action n · x = n + x.

Here’s another way to generate some coverings. Let X be a topological space and G a group acting on its points. Thus for every g ∈ G, we get a map X → X by

x ↦ g · x.

We require that this map is continuous for every g ∈ G, and that the stabilizer of each point in X is trivial. Then we can consider a quotient space X/G defined by fusing any points in the same orbit of this action. Thus the points of X/G are identified with the orbits of the action. Then we get a natural “projection”

X → X/G

by simply sending every point to the orbit it lives in.

Definition 59.4.1. Such a projection is called regular. (Terrible, I know.)

Example 59.4.2 (ℝ → S1 is regular)
Let G = ℤ, X = ℝ and define the group action of G on X by

n · x = n + x.

You can then think of X/G as “real numbers modulo 1”, with [0,1) a complete set of representatives and 0 ∼ 1.

So we can identify X/G with S1 and the associated regular projection is just our usual exp : θ ↦ e2πiθ.

Example 59.4.3 (The torus)
Let G = ℤ × ℤ and X = ℝ2, and define the group action of G on X by (m, n) · (x, y) = (m + x, n + y). As [0,1)2 is a complete set of representatives, you can think of it as a unit square with the edges identified. We obtain the torus S1 × S1 and a covering projection ℝ2 → S1 × S1.

Example 59.4.4 (ℝℙ2)
Let G = ℤ/2ℤ = ⟨T | T2 = 1⟩ and let X = S2 be the surface of the sphere, viewed as a subset of ℝ3. We’ll let G act on X by sending T · x = −x; hence the orbits are pairs of opposite points (e.g. North and South pole).

Let’s draw a picture of a space. All the orbits have size two: every point below the equator gets fused with a point above the equator. As for the points on the equator, we can take half of them; the other half gets fused with the corresponding antipodes.

Now if we flatten everything, you can think of the result as a disk with half its boundary: this is ℝℙ2 from before. The resulting space has a name: real projective 2-space, denoted ℝℙ2.

This gives us a covering projection S2 ℝℙ2 (note that the pre-image of a sufficiently small patch is just two copies of it on S2.)

Example 59.4.5 (Fundamental group of ℝℙ2)
As above, we saw that there was a covering projection S2 → ℝℙ2. Moreover the fiber of any point has size two. Since S2 is simply connected, we have a natural bijection from π1(ℝℙ2) to a set of size two; that is,

|π1(ℝℙ2)| = 2.

This can only occur if π1(ℝℙ2) ≅ ℤ/2ℤ, as there is only one group of order two!

Question 59.4.6. Show each of the continuous maps x ↦ g · x is in fact a homeomorphism. (Name its continuous inverse.)

59.5  The algebra of fundamental groups

Prototypical example for this section: S1, with fundamental group ℤ.

Next up, we’re going to turn functions between spaces into homomorphisms of fundamental groups.

Let X and Y be topological spaces and f : (X, x0) → (Y, y0). Recall that we defined a group homomorphism

f♯ : π1(X, x0) → π1(Y, y0) by [γ] ↦ [f ∘ γ].

More importantly, we have:

Proposition 59.5.1
Let p : (E, e0) → (B, b0) be a covering projection of path-connected spaces. Then the homomorphism p♯ : π1(E, e0) → π1(B, b0) is injective. Hence p♯^img(π1(E, e0)) is an isomorphic copy of π1(E, e0) as a subgroup of π1(B, b0).

Proof. We’ll show ker p♯ is trivial. It suffices to show that if γ̃ is a loop in E whose image γ = p ∘ γ̃ is nulhomotopic in B, then γ̃ is nulhomotopic in E.

By definition, there’s a homotopy F : [0,1] × [0,1] → B taking γ to the constant loop 1B. We can lift it to a homotopy F̃ : [0,1] × [0,1] → E that establishes γ̃ ≃ 1̃B, where 1̃B is some lift of 1B. But the constant loop 1E is a lift of 1B (duh), and lifts are unique. □

Example 59.5.2 (Subgroups of ℤ)
Let’s look at the space S1 with fundamental group ℤ. The group ℤ has two types of subgroups: the trivial subgroup, which corresponds to the universal cover ℝ → S1, and the subgroups nℤ, which correspond to the covers S1 → S1 by z ↦ zn.

It turns out that these are the only covering projections of S1 by path-connected spaces: there’s one for each subgroup of ℤ. (We don’t care about disconnected spaces because, again, a covering projection via disconnected spaces is just a bunch of unrelated “good” coverings.) For this statement to make sense I need to tell you what it means for two covering projections to be equivalent.

Definition 59.5.3. Fix a space B. Given two covering projections p1 : E1 → B and p2 : E2 → B, a map of covering projections is a continuous function f : E1 → E2 such that p2 ∘ f = p1.

E1  −f→  E2
   p1 ↘  ↙ p2
       B

Then two covering projections p1 and p2 are isomorphic if there are maps of covering projections f : E1 → E2 and g : E2 → E1 such that g ∘ f = idE1 and f ∘ g = idE2.

Remark 59.5.4 (For category theorists) The set of covering projections forms a category in this way.

It’s an absolute miracle that this is true more generally: the greatest triumph of covering spaces is the following result. Suppose a space X satisfies some nice conditions, like:

Definition 59.5.5. A space X is called locally connected if for each point x X and open neighborhood V of it, there is a connected open set U with x U V .

Definition 59.5.6. A space X is semi-locally simply connected if for every point x X there is an open neighborhood U such that all loops in U are nulhomotopic. (But the contraction need not take place in U.)

Example 59.5.7 (These conditions are weak)
Pretty much every space I’ve shown you has these two properties. In other words, they are rather mild conditions, and you can think of them as just saying “the space is not too pathological”.

Then we get:

Theorem 59.5.8 (Group theory via covering spaces)
Suppose B is a locally connected, semi-locally simply connected space. Then the covering projections p : E → B with E path-connected are classified, up to isomorphism, by the subgroups of π1(B, b0): each covering projection corresponds to the subgroup p♯^img(π1(E, e0)) of π1(B, b0).

Hence it’s possible to understand the group theory of π1(B) completely in terms of the covering projections.

Moreover, this is how the “universal cover” gets its name: it is the one corresponding to the trivial subgroup of π1(B). Actually, you can show that it really is universal in the sense that if p : E → B is another covering projection, then E is in turn covered by the universal space. More generally, if H1 ⊆ H2 ⊆ G are subgroups, then the space corresponding to H2 can be covered by the space corresponding to H1.

59.6  A few harder problems to think about


Part XVI
Category Theory

60  Objects and morphisms

I can’t possibly hope to do category theory any justice in these few chapters; thus I’ll just give a very high-level overview of how many of the concepts we’ve encountered so far can be re-cast into categorical terms. So I’ll say what a category is, give some examples, then talk about a few things that categories can do. For my examples, I’ll be drawing from all the previous chapters; feel free to skip over the examples corresponding to things you haven’t seen.

If you’re interested in category theory (like I was!), perhaps in what surprising results are true for general categories, I strongly recommend [?].

60.1  Motivation: isomorphisms

From earlier chapters let’s recall the definition of an isomorphism of two objects: two groups are isomorphic if there are mutually inverse homomorphisms between them; two topological spaces are homeomorphic if there are mutually inverse continuous maps between them; two vector spaces are isomorphic if there are mutually inverse linear maps between them.

In each case we have some collections of objects and some maps, and the isomorphisms can be viewed as just maps. Let’s use this to motivate the definition of a general category.

60.2  Categories, and examples thereof

Prototypical example for this section: Grp is possibly the most natural example.

Definition 60.2.1. A category 𝒜 consists of:

• A class of objects, denoted obj(𝒜).
• For every pair of objects A1, A2 ∈ obj(𝒜), a class of arrows (also called morphisms) between them, denoted Hom𝒜(A1, A2).
• A composition law: arrows f : A1 → A2 and g : A2 → A3 can be composed to an arrow g ∘ f : A1 → A3, and this composition is associative.
• For each object A, an identity arrow idA ∈ Hom𝒜(A, A), satisfying f ∘ idA = f and idA ∘ g = g whenever the compositions make sense.

Abuse of Notation 60.2.2. From now on, by A ∈𝒜 we’ll mean A obj(𝒜).

Abuse of Notation 60.2.3. You can think of “class” as just “set”. The reason we can’t use the word “set” is because of some paradoxical issues with collections which are too large; Cantor’s Paradox says there is no set of all sets. So referring to these by “class” is a way of sidestepping these issues.

Now and forever I’ll be sloppy and assume all my categories are locally small, meaning that Hom𝒜(A1, A2) is a set for any A1, A2 ∈ 𝒜. So the objects of 𝒜 may not form a set, but the set of morphisms between two given objects will always be assumed to be a set.

Let’s formalize the motivation we began with.

Example 60.2.4 (Basic examples of categories)

(a)
There is a category of groups Grp. The data is
  • The objects of Grp are the groups.
  • The arrows of Grp are the homomorphisms between these groups.
  • The composition in Grp is function composition.
(b)
In the same way we can conceive a category CRing of (commutative) rings.
(c)
Similarly, there is a category Top of topological spaces, whose arrows are the continuous maps.
(d)
There is a category Top∗ of topological spaces with a distinguished basepoint; that is, a pair (X, x0) where x0 ∈ X. Arrows are continuous maps f : (X, x0) → (Y, y0), meaning continuous maps f : X → Y with f(x0) = y0.
(e)
Similarly, there is a category Vectk of vector spaces (possibly infinite-dimensional) over a field k, whose arrows are the linear maps. There is even a category FDVectk of finite-dimensional vector spaces.
(f)
We have a category Set of sets, where the arrows are any maps.

And of course, we can now define what an isomorphism is!

Definition 60.2.5. An arrow f : A1 → A2 is an isomorphism if there exists an arrow g : A2 → A1 such that f ∘ g = idA2 and g ∘ f = idA1. In that case we say A1 and A2 are isomorphic, hence A1 ≅ A2.

Remark 60.2.6 — Note that in Set, X ≅ Y ⟺ |X| = |Y|.

Question 60.2.7. Check that every object in a category is isomorphic to itself. (This is offensively easy.)

More importantly, this definition should strike you as a little impressive. We’re able to define whether two groups (rings, spaces, etc.) are isomorphic solely by the functions between the objects. Indeed, one of the key themes in category theory (and even algebra) is that

One can learn about objects by the functions between them. Category theory takes this to the extreme by only looking at arrows, and ignoring what the objects themselves are.

But there are some trickier interesting examples of categories.

Example 60.2.8 (Posets are categories)
Let 𝒫 be a partially ordered set. We can construct a category P for it as follows: the objects of P are the elements of 𝒫, and there is exactly one arrow x → y if x ≤ y, and no arrows otherwise.

For example, for the poset 𝒫 on four objects {a, b, c, d} with a ≤ b and a ≤ c ≤ d, the category P has arrows a → b, a → c, c → d, the composite a → d, and an identity arrow at each object.

This illustrates the point that

The arrows of a category can be totally different from functions.

In fact, in a way that can be made precise, the term “concrete category” refers to one where the arrows really are “structure-preserving maps between sets”, like Grp, Top, or CRing.

Question 60.2.9. Check that no two distinct objects of a poset are isomorphic.

Here’s a second quite important example of a non-concrete category.

Example 60.2.10 (Important: groups are one-object categories)
A group G can be interpreted as a category 𝒢 with one object ∗, all of whose arrows are isomorphisms: the arrows from ∗ to itself are the elements of G, and composition is the group operation.

As [?] says:

The first time you meet the idea that a group is a kind of category, it’s tempting to dismiss it as a coincidence or a trick. It’s not: there’s real content. To see this, suppose your education had been shuffled and you took a course on category theory before ever learning what a group was. Someone comes to you and says:

“There are these structures called ‘groups’, and the idea is this: a group is what you get when you collect together all the symmetries of a given thing.”

“What do you mean by a ‘symmetry’?” you ask.

“Well, a symmetry of an object X is a way of transforming X or mapping X into itself, in an invertible way.”

“Oh,” you reply, “that’s a special case of an idea I’ve met before. A category is the structure formed by lots of objects and mappings between them – not necessarily invertible. A group’s just the very special case where you’ve only got one object, and all the maps happen to be invertible.”

Exercise 60.2.11. Verify the above! That is, show that the data of a one-object category with all isomorphisms is the same as the data of a group.

Finally, here are some examples of categories you can make from other categories.

Example 60.2.12 (Deriving categories)

(a)
Given a category 𝒜, we can construct the opposite category 𝒜op, which is the same as 𝒜 but with all arrows reversed.
(b)
Given categories 𝒜 and ℬ, we can construct the product category 𝒜 × ℬ as follows: the objects are pairs (A, B) for A ∈ 𝒜 and B ∈ ℬ, and the arrows from (A1, B1) to (A2, B2) are pairs

(f : A1 → A2, g : B1 → B2).

What do you think the composition and identities are?
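
(If you want to check your guess: the expected answer is that everything is componentwise, i.e.

(f2, g2) ∘ (f1, g1) = (f2 ∘ f1, g2 ∘ g1)   and   id(A,B) = (idA, idB).)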

60.3  Special objects in categories

Prototypical example for this section: Set has initial object ∅ and final object {∗}. An element of S corresponds to a map {∗} → S.

Certain objects in categories have special properties. Here are a couple examples.

Example 60.3.1 (Initial object)
An initial object of 𝒜 is an object Ainit ∈𝒜 such that for any A ∈𝒜 (possibly A = Ainit), there is exactly one arrow from Ainit to A. For example,

(a)
The initial object of Set is the empty set ∅.
(b)
The initial object of Grp is the trivial group {1}.
(c)
The initial object of CRing is the ring ℤ (recall that ring homomorphisms R → S map 1R to 1S).
(d)
The initial object of Top is the empty space.
(e)
The initial object of a partially ordered set is its smallest element, if one exists.

We will usually refer to “the” initial object of a category, since:

Exercise 60.3.2 (Important!). Show that any two initial objects A1, A2 of 𝒜 are uniquely isomorphic, meaning there is a unique isomorphism between them.

Remark 60.3.3 — In mathematics, we usually neither know nor care if two objects are actually equal or whether they are isomorphic. For example, there are many competing ways to define ℝ, but we still just refer to it as “the” real numbers.

Thus when we define categorical notions, we would like to check they are unique up to isomorphism. This is really clean in the language of categories, and definitions often cause objects to be unique up to isomorphism for elegant reasons like the above.

One can take the “dual” notion, a terminal object.

Example 60.3.4 (Terminal object)
A terminal object of 𝒜 is an object Afinal ∈𝒜 such that for any A ∈𝒜 (possibly A = Afinal), there is exactly one arrow from A to Afinal. For example,

(a)
The terminal object of Set is the singleton set {∗}. (There are many singleton sets, of course, but as sets they are all isomorphic!)
(b)
The terminal object of Grp is the trivial group {1}.
(c)
The terminal object of CRing is the zero ring 0. (Recall that ring homomorphisms R S must map 1R to 1S).
(d)
The terminal object of Top is the single-point space.
(e)
The terminal object of a partially ordered set is its maximal element, if one exists.

Again, terminal objects are unique up to isomorphism. The reader is invited to repeat the proof from the preceding exercise here. However, we can illustrate more strongly the notion of duality to give a short proof.

Question 60.3.5. Verify that terminal objects of 𝒜 are equivalent to initial objects of 𝒜op. Thus terminal objects of 𝒜 are unique up to isomorphism.

In general, one can consider in this way the dual of any categorical notion: properties of 𝒜 can all be translated to dual properties of 𝒜op (often by adding the prefix “co” in front).

One last neat construction: suppose we’re working in a concrete category, meaning (loosely) that the objects are “sets with additional structure”. Now suppose you’re sick of maps and just want to think about elements of these sets. Well, I won’t let you do that since you’re reading a category theory chapter, but I will offer you some advice:

• In Set, an element of a set S amounts to a map {∗} → S.
• In Grp, an element of a group G amounts to a homomorphism ℤ → G.
• In CRing, an element of a ring R amounts to a ring homomorphism ℤ[x] → R.

and so on. So in most concrete categories, you can think of elements as functions from special sets to the set in question. In each of these cases we call the object in question a free object.

60.4  Binary products

Prototypical example for this section: X × Y in most concrete categories is the set-theoretic product.

The “universal property” is a way of describing objects in terms of maps in such a way that it defines the object up to unique isomorphism (much the same as the initial and terminal objects).

To show how this works in general, let me give a concrete example. Suppose I’m in a category – let’s say Set for now. I have two sets X and Y , and I want to construct the Cartesian product X × Y as we know it. The philosophy of category theory dictates that I should talk about maps only, and avoid referring to anything about the sets themselves. How might I do this?

Well, let’s think about maps into X × Y . The key observation is that

A function f : A → X × Y amounts to a pair of functions (g : A → X, h : A → Y).

Put another way, there are natural projection maps πX : X × Y → X and πY : X × Y → Y.

(We have to do this in terms of projection maps rather than elements, because category theory forces us to talk about arrows.) Now how do I add A to this diagram? The point is that there is a bijection between functions f : A → X × Y and pairs (g, h) of functions. Thus for every pair g : A → X and h : A → Y there is a unique function f : A → X × Y.

But X × Y is special in that it is “universal”: for any other set A, if you give me functions g : A → X and h : A → Y, I can use it to build a unique function f : A → X × Y. Picture:

  A −∃!f→ X × Y,   with πX ∘ f = g and πY ∘ f = h.
We can do this in any general category, defining a so-called product.

Definition 60.4.1. Let X and Y be objects in any category 𝒜. The product consists of an object X × Y and arrows πX : X × Y → X, πY : X × Y → Y (thought of as projections). We require that for any object A and arrows g : A → X, h : A → Y, there is a unique arrow f : A → X × Y such that πX ∘ f = g and πY ∘ f = h; that is, the diagram

            X
        g ↗  ↑ πX
      A −∃!f→ X × Y
        h ↘  ↓ πY
            Y

commutes.

Abuse of Notation 60.4.2. Strictly speaking, the product should consist of both the object X×Y and the projection maps πX and πY . However, if πX and πY are understood, then we often use X × Y to refer to the object, and refer to it also as the product.

Products do not always exist; for example, take a category with just two objects and no non-identity morphisms. Nonetheless:

Proposition 60.4.3 (Uniqueness of products)
When they exist, products are unique up to isomorphism: given two products P1 and P2 of X and Y there is an isomorphism between the two objects.

Proof. This is very similar to the proof that initial objects are unique up to unique isomorphism. Consider two such objects P1 and P2, with projection maps πX, πY (for P1) and ρX, ρY (for P2). Applying the universal property of P2 to the arrows πX : P1 → X and πY : P1 → Y gives a unique morphism f : P1 → P2 with ρX ∘ f = πX and ρY ∘ f = πY; symmetrically, there is a unique g : P2 → P1 with πX ∘ g = ρX and πY ∘ g = ρY.

On the other hand, look at g ∘ f : P1 → P1. It satisfies πX ∘ (g ∘ f) = πX and πY ∘ (g ∘ f) = πY, so by the universal property of P1 (applied with A = P1) it is the only such map. But idP1 works as well. Thus idP1 = g ∘ f. Similarly, f ∘ g = idP2, so f and g are isomorphisms. □

Abuse of Notation 60.4.4. Actually, this is not really the morally correct theorem; since we’ve only showed the objects P1 and P2 are isomorphic and have not made any assertion about the projection maps. But I haven’t (and won’t) define isomorphism of the entire product, and so in what follows if I say “P1 and P2 are isomorphic” I really just mean the objects are isomorphic.

Exercise 60.4.5. In fact, show the products are unique up to unique isomorphism: the f and g above are the only isomorphisms between the objects P1 and P2.

The nice fact about this “universal property” mindset is that we don’t have to give explicit constructions; assuming existence, the “universal property” allows us to bypass all this work by saying “the object with these properties is unique up to unique isomorphism”, thus we don’t need to understand the internal workings of the object to use its properties.

Of course, that’s not to say we can’t give concrete examples.

Example 60.4.6 (Examples of products)

(a)
In Set, the product of two sets X and Y is their Cartesian product X × Y .
(b)
In Grp, the product of G, H is the group product G × H.
(c)
In Vectk, the product of V and W is V ⊕ W.
(d)
In CRing, the product of R and S is appropriately the ring product R × S.
(e)
Let 𝒫 be a poset interpreted as a category. Then the product of two objects x and y is the greatest lower bound; for example,
  • If the poset is (ℝ, ≤) then it’s min{x, y}.
  • If the poset is the subsets of a finite set by inclusion, then it’s x ∩ y.
  • If the poset is the positive integers ordered by division, then it’s gcd(x,y).

Of course, we can define products of more than just one object. Consider a set of objects (Xi)i∈I in a category 𝒜. We define a cone on the Xi to be an object A with some “projection” maps to each Xi. Then the product is a cone P which is “universal” in the same sense as before: given any other cone A there is a unique map A → P making the diagram commute. In short, a product is a “universal cone”.

The picture of this is a large cone: the object A sits above P, each of them with projections down to all the Xi, and there is a unique arrow making everything commute:

  A −∃!f→ P −→ Xi   (one projection for each i ∈ I).

See also ?? .

One can also do the dual construction to get a coproduct: given X and Y, it’s the object X + Y together with maps ιX : X → X + Y and ιY : Y → X + Y (that’s Greek iota, think inclusion) such that for any object A and maps g : X → A, h : Y → A there is a unique f : X + Y → A for which

  f ∘ ιX = g   and   f ∘ ιY = h,

i.e. the evident diagram
commutes. We’ll leave some of the concrete examples as an exercise this time, for example:

Exercise 60.4.7. Describe the coproduct in Set.

Predictable terminology: a coproduct is a universal cocone.

Spoiler alert later on: this construction can be generalized vastly to so-called “limits”, and we’ll do so later on.

60.5  Monic and epic maps

The notion of “injective” doesn’t make sense in an arbitrary category since arrows need not be functions. The correct categorical notion is:

Definition 60.5.1. A map f : X → Y is monic (or a monomorphism) if for any pair of arrows g, h : A → X with f ∘ g = f ∘ h, we must have g = h. In other words, f ∘ g = f ∘ h =⇒ g = h.

Question 60.5.2. Verify that in a concrete category, injective =⇒ monic.

Question 60.5.3. Show that the composition of two monic maps is monic.

In most but not all situations, the converse is also true.

Exercise 60.5.4. Show that in Set, Grp, CRing, monic implies injective. (Take A = {∗}, A = ℤ, A = ℤ[x], respectively.)

More generally, as we said before there are many categories with a "free" object whose maps can be thought of as elements: an element of a set is a function 1 → S, an element of a ring is a function ℤ[x] → R, et cetera. In all these categories, the definition of monic literally reads "f is injective on Hom𝒜(A, X)". So in these categories, "monic" and "injective" coincide.
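In symbols, the identifications just described are (a summary added for convenience)

\[
\operatorname{Hom}_{\mathsf{Set}}(\{*\}, S) \cong S, \qquad
\operatorname{Hom}_{\mathsf{Grp}}(\mathbb{Z}, G) \cong G, \qquad
\operatorname{Hom}_{\mathsf{CRing}}(\mathbb{Z}[x], R) \cong R,
\]

and f being monic says exactly that post-composition with f is injective on each such Hom-set.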

That said, here is the standard counterexample. An additive abelian group G = (G, +) is called divisible if for every x ∈ G and every integer n ≥ 1 there exists y ∈ G with ny = x. Let DivAbGrp be the category of such groups.

Exercise 60.5.5. Show that the projection ℚ → ℚ/ℤ is monic in DivAbGrp but not injective.

Of course, we can also take the dual notion.

Definition 60.5.6. A map f : X → Y is epic (or an epimorphism) if for any pair of maps g, h : Y → A making the diagram X → Y ⇉ A commute, we must have g = h. In other words, g ∘ f = h ∘ f ⟹ g = h.

This is kind of like surjectivity, although the analogy is not as tight as last time. Note that in concrete categories, surjective ⟹ epic.

Exercise 60.5.7. Show that in Set, Grp, Ab, Vectk, Top, the notions of epic and surjective coincide. (For Set, take A = {0,1}.)

However, there are more cases where it fails. Most notably:

Example 60.5.8 (Epic but not surjective)

(a)
In CRing, for instance, the inclusion ℤ ↪ ℚ is epic (but not surjective). Indeed, if two ring homomorphisms out of ℚ agree on every integer then they agree everywhere (why?).
(b)
In the category of Hausdorff topological spaces (those in which every two points have disjoint open neighborhoods), epic is in fact equivalent to having dense image (like ℚ ↪ ℝ).

Thus failures arise when a function f : X → Y can be determined by just some of the points of X.
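To spell out the "(why?)" above (an added computation): any ring homomorphism ψ out of ℚ is forced by its values on ℤ, since for integers p and q ≠ 0 we have ψ(q)·ψ(p/q) = ψ(p), and ψ(q) is invertible in the codomain because ψ(q)·ψ(1/q) = ψ(1) = 1. Hence

\[
\psi\left(\frac{p}{q}\right) = \psi(p)\,\psi(q)^{-1},
\]

so two homomorphisms out of ℚ agreeing on ℤ agree everywhere, which is exactly the statement that ℤ ↪ ℚ is epic.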

60.6  A few harder problems to think about

Problem 60A. In the category Vectk of k-vector spaces (for a field k), what are the initial and terminal objects?

Problem 60B. What is the coproduct X+Y in the categories Set, Vectk, and a poset?

Problem 60C. In any category 𝒜 where all products exist, show that

\[
(X \times Y) \times Z \cong X \times (Y \times Z)
\]

where X, Y, Z are arbitrary objects. (Here both sides refer to the objects, as in ??.)

Problem 60D. Consider a category 𝒜 with a zero object, meaning an object which is both initial and terminal. Given objects X and Y in 𝒜, prove that the projection X × Y → X is epic.

61  Functors and natural transformations

Functors are maps between categories; natural transformations are maps between functors.

61.1  Many examples of functors

Prototypical example for this section: Forgetful functors; fundamental groups.

Here’s the point of a functor:

Pretty much any time you make an object out of another object, you get a functor.

Before I give you a formal definition, let me list (informally) some examples. (You’ll notice some of them have opposite categories 𝒜op appearing in places. Don’t worry about those for now; you’ll see why in a moment.)

61.2  Covariant functors

Prototypical example for this section: Forgetful/free functors.

Category theorists are always asking “what are the maps?”, and so we can now think about maps between categories.

Definition 61.2.1. Let 𝒜 and ℬ be categories. Of course, a functor F takes every object of 𝒜 to an object of ℬ. In addition, though, it must take every arrow f : A1 → A2 to an arrow F(f) : F(A1) → F(A2). You can picture this as the arrow f : A1 → A2 on the 𝒜 side being carried across to the arrow F(f) : F(A1) → F(A2) on the ℬ side. (I'll try to use dotted arrows for functors, which cross different categories, for emphasis.) It needs to satisfy the "naturality" requirements:

  • F preserves identity arrows: F(id_A) = id_{F(A)} for every object A ∈ 𝒜.
  • F preserves composition: F(g ∘ f) = F(g) ∘ F(f) whenever g ∘ f makes sense.

So the idea is:

Whenever we naturally make an object A ∈ 𝒜 into an object B ∈ ℬ, there should usually be a natural way to transform a map A1 → A2 into a map B1 → B2.

Let’s see some examples of this.

Example 61.2.2 (Free and forgetful functors)
Note that these are both informal terms, and don’t have a rigid definition.

(a)
We talked about a forgetful functor earlier, which takes the underlying set of each object in a category like Vectk. Let's call it U : Vectk → Set.

Now, given a map T : V1 → V2 in Vectk, there is an obvious U(T) : U(V1) → U(V2), which is just the set-theoretic map corresponding to T.

Similarly there are forgetful functors from Grp, CRing, etc., to Set. There is even a forgetful functor CRing → Grp: send a ring R to the abelian group (R, +). The common theme is that we are "forgetting" structure from the original category.

(b)
We also talked about a free functor in the example. A free functor F : Set → Vectk can be obtained by letting F(S) be the vector space with basis S. Now, given a map f : S → T, what is the obvious map F(S) → F(T)? Simple: take each basis element s ∈ S to the basis element f(s) ∈ T (made fully explicit below).

Similarly, we can define F : Set → Grp by taking F(S) to be the free group generated by the set S.
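In the Vectk case, explicitly (an added formula), F(f) is the unique linear extension of s ↦ f(s):

\[
F(f)\left( \sum_{s \in S} c_s\, s \right) = \sum_{s \in S} c_s\, f(s),
\]

where all but finitely many coefficients c_s ∈ k vanish. One can then check F(id_S) = id_{F(S)} and F(g ∘ f) = F(g) ∘ F(f), so F really is a functor.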

Remark 61.2.3 — There is also a notion of "injective" and "surjective" for functors (on arrows) as follows. A functor F : 𝒜 → ℬ is faithful (resp. full) if for any A1, A2, the map F : Hom𝒜(A1, A2) → Homℬ(FA1, FA2) is injective (resp. surjective).

We can use this to give an exact definition of concrete category: it’s a category with a faithful (forgetful) functor U : 𝒜→Set.

Example 61.2.4 (Functors from 𝒢)
Let G be a group and 𝒢 = {∗} be the associated one-object category.

(a)
Consider a functor F : 𝒢 → Set, and let S = F(∗). Then the data of F corresponds to putting a group action of G on S.
(b)
Consider a functor F : 𝒢 → FDVectk, and let V = F(∗) have dimension n. Then the data of F corresponds to a homomorphism from G to the invertible n×n matrices (i.e. the invertible linear maps V → V). This is one way groups historically arose; the theory of viewing groups as matrices forms the field of representation theory.
(c)
Let H be a group and construct ℋ = {∗} the same way. Then functors 𝒢 → ℋ correspond to homomorphisms G → H.

Exercise 61.2.5. Check the above group-based functors work as advertised.

Here’s a more involved example. If you find it confusing, skip it and come back after reading about its contravariant version.

Example 61.2.6 (Covariant Yoneda functor)
Fix an A ∈ 𝒜. For a category 𝒜, define the covariant Yoneda functor H^A : 𝒜 → Set by

\[
H^A(A_1) := \operatorname{Hom}_{\mathcal{A}}(A, A_1) \in \mathsf{Set}.
\]

Hence each A1 is sent to the set of arrows from A to A1; so H^A describes how A sees the world.

Now we want to specify how H^A behaves on arrows. For each arrow f : A1 → A2, we need to specify a Set-map Hom𝒜(A, A1) → Hom𝒜(A, A2); in other words, we need to send an arrow p : A → A1 to an arrow A → A2. There's only one reasonable way to do this: take the composition

\[
A \xrightarrow{\,p\,} A_1 \xrightarrow{\,f\,} A_2.
\]

In other words, H^A(f) is p ↦ f ∘ p. In still other words, H^A(f) = f ∘ −; the − is a slot for the input to go into.
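As a sanity check (added), H^A really is a functor, since filling the slot is compatible with composing arrows first:

\[
H^A(\operatorname{id}_{A_1}) = \operatorname{id}_{A_1} \circ - = \operatorname{id}_{H^A(A_1)},
\qquad
H^A(g \circ f)(p) = (g \circ f) \circ p = \big(H^A(g) \circ H^A(f)\big)(p).
\]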

As another example:

Question 61.2.7. If 𝒫 and 𝒬 are posets interpreted as categories, what does a functor from 𝒫 to 𝒬 represent?

Now, let me explain why we might care. Consider the following "obvious" fact: if G and H are isomorphic groups, then they have the same size. We can formalize it by saying: if G ≅ H in Grp and U : Grp → Set is the forgetful functor (mapping each group to its underlying set), then U(G) ≅ U(H). The beauty of category theory shows itself: this in fact works for any functor between any two categories, and the proof is done solely through arrows:

Theorem 61.2.8 (Functors preserve isomorphism)
If A1 ≅ A2 are isomorphic objects in 𝒜 and F : 𝒜 → ℬ is a functor, then F(A1) ≅ F(A2).

Proof. Try it yourself! The picture is: the pair of inverse arrows f : A1 → A2 and g : A2 → A1 on the 𝒜 side gets carried by F to the pair F(f) : F(A1) → F(A2) and F(g) : F(A2) → F(A1) on the ℬ side.
You’ll need to use both key properties of functors: they preserve composition and the identity map. □

This will give us a great intuition in the future, because

(i)
Almost every operation we do in our lifetime will be a functor, and
(ii)
We now know that functors take isomorphic objects to isomorphic objects.

Thus, we now automatically know that basically any "reasonable" operation we do will preserve isomorphism (where "reasonable" means that it's a functor). This is super convenient in algebraic topology, for example; see ??, where we get for free that homotopic spaces have isomorphic fundamental groups.
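As a quick illustration (added): abelianization G ↦ G^{ab} = G/[G,G] is a functor Grp → Ab, because any homomorphism φ : G → H sends commutators to commutators and hence descends to a map G^{ab} → H^{ab}. Theorem 61.2.8 then gives, for free,

\[
G \cong H \implies G^{\mathrm{ab}} \cong H^{\mathrm{ab}}.
\]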

Remark 61.2.9 — This lets us construct a category Cat whose objects are categories and arrows are functors.

61.3  Contravariant functors

Prototypical example for this section: Dual spaces, contravariant Yoneda functor, etc.

Now I have to explain what the opposite categories were doing earlier. In all the previous examples, we took an arrow A1 → A2, and it became an arrow F(A1) → F(A2). Sometimes, however, the arrow in fact goes the other way: we get an arrow F(A2) → F(A1) instead. In other words, instead of just getting a functor 𝒜 → ℬ we ended up with a functor 𝒜op → ℬ.

These functors have a name:

Definition 61.3.1. A contravariant functor from 𝒜 to ℬ is a functor F : 𝒜op → ℬ. (Note that we do not write "contravariant functor F : 𝒜 → ℬ", since that would be confusing; the function notation will always use the correct domain and codomain.)

Pictorially: an arrow f : A1 → A2 in 𝒜 gets sent to an arrow F(f) : F(A2) → F(A1) in ℬ, pointing in the opposite direction.
For emphasis, a usual functor is often called a covariant functor. (The word “functor” with no adjective always refers to covariant.)

Let’s see why this might happen.

Example 61.3.2 (V ↦ V^∨ is contravariant)
Consider the functor Vectk → Vectk by V ↦ V^∨.

If we were trying to specify a covariant functor, we would need, for every linear map T : V1 → V2, a linear map V1^∨ → V2^∨. But recall that V1^∨ = Hom(V1, k) and V2^∨ = Hom(V2, k): there's no easy way to get an obvious map from left to right.

However, there is an obvious map from right to left: given ξ2 : V2 → k, we can easily give a map V1 → k: just compose with T! In other words, there is a very natural map V2^∨ → V1^∨, given by the composition

\[
V_1 \xrightarrow{\,T\,} V_2 \xrightarrow{\,\xi_2\,} k.
\]

In summary, a map T : V1 → V2 naturally induces a map T^∨ : V2^∨ → V1^∨ in the opposite direction, and this assignment V ↦ V^∨, T ↦ T^∨ is our contravariant functor.

We can generalize the example above to any category by replacing the field k with any chosen object A ∈ 𝒜.

Example 61.3.3 (Contravariant Yoneda functor)
The contravariant Yoneda functor on 𝒜, denoted H_A : 𝒜op → Set, is used to describe how objects of 𝒜 see A. For each X ∈ 𝒜 it puts

\[
H_A(X) := \operatorname{Hom}_{\mathcal{A}}(X, A) \in \mathsf{Set}.
\]

For f : X → Y in 𝒜, the map H_A(f) sends each arrow p : Y → A in Hom𝒜(Y, A) to the composition

\[
X \xrightarrow{\,f\,} Y \xrightarrow{\,p\,} A \ \in\ \operatorname{Hom}_{\mathcal{A}}(X, A),
\]

as we did above. Thus H_A(f) is an arrow Hom𝒜(Y, A) → Hom𝒜(X, A). (Note the flipping!)
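Analogously to the covariant case (an added check), H_A is compatible with composition, except the order flips, as it must for a functor out of 𝒜op: for f : X → Y and g : Y → Z,

\[
H_A(g \circ f)(p) = p \circ (g \circ f) = \big(H_A(f) \circ H_A(g)\big)(p),
\qquad\text{i.e.}\quad H_A(g \circ f) = H_A(f) \circ H_A(g).
\]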

Exercise 61.3.4. Check now the claim that (A1, A2) ↦ Hom𝒜(A1, A2) gives a functor 𝒜op × 𝒜 → Set.

61.4  Equivalence of categories


61.5  (Optional) Natural transformations

We made categories to keep track of objects and maps, then went a little crazy and asked “what are the maps between categories?” to get functors. Now we’ll ask “what are the maps between functors?” to get natural transformations.

It might sound terrifying that we're drawing arrows between functors, but this is actually an old idea. Recall that given two paths α, β : [0,1] → X, we built a path-homotopy by "continuously deforming" the path α to β; this could be viewed as a function [0,1] × [0,1] → X. The definition of a natural transformation is similar: we want to pull F to G along a series of arrows in the target category ℬ.

Definition 61.5.1. Let F, G : 𝒜 → ℬ be two functors. A natural transformation α from F to G, denoted α : F ⇒ G, consists of, for each A ∈ 𝒜, an arrow αA ∈ Homℬ(F(A), G(A)), which is called the component of α at A. Pictorially, each object A ∈ 𝒜 has two images F(A), G(A) ∈ ℬ, and the component αA is an arrow from F(A) down to G(A).
These αA are subject to the "naturality" requirement that for any f : A1 → A2, the diagram

\[
\begin{array}{ccc}
F(A_1) & \xrightarrow{F(f)} & F(A_2) \\
{\scriptstyle \alpha_{A_1}}\downarrow & & \downarrow{\scriptstyle \alpha_{A_2}} \\
G(A_1) & \xrightarrow{G(f)} & G(A_2)
\end{array}
\]

commutes, i.e. αA2 ∘ F(f) = G(f) ∘ αA1.

The arrow αA represents the path that F(A) takes to get to G(A) (just as in a path-homotopy from α to β each point α(t) gets deformed to the point β(t) continuously). A picture might help; consider the following figure.

Here 𝒜 is the small category with three elements and two non-identity arrows f, g (I’ve omitted the identity arrows for simplicity). The images of 𝒜 under F and G are the blue and green “subcategories” of . Note that could potentially have many more objects and arrows in it (grey). The natural transformation α (red) selects an arrow of from each F(A) to the corresponding G(A), dragging the entire image of F to the image of G. Finally, we require that any diagram formed by the blue, red, and green arrows is commutative (naturality), so the natural transformation is really “natural”.
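Here is a concrete instance (added, in the notation of Example 61.2.4): two functors F, G : 𝒢 → Set are two G-sets S = F(∗) and T = G(∗), and a natural transformation α : F ⇒ G is a single map α∗ : S → T. The naturality square at each arrow g ∈ G says exactly

\[
\alpha_*(g \cdot s) = g \cdot \alpha_*(s) \qquad \text{for all } g \in G,\ s \in S;
\]

that is, natural transformations between such functors are precisely the G-equivariant maps.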

There is a second equivalent definition that looks much more like the homotopy.

Definition 61.5.2. Let 2 denote the category generated by a poset with two elements 0 ≤ 1; that is, it has two objects 0 and 1 and a single non-identity arrow 0 → 1. Then a natural transformation α : F ⇒ G is just a functor α : 𝒜 × 2 → ℬ satisfying

\[
\alpha(A, 0) = F(A), \quad \alpha(f, 0) = F(f)
\qquad\text{and}\qquad
\alpha(A, 1) = G(A), \quad \alpha(f, 1) = G(f).
\]

More succinctly, α(−, 0) = F and α(−, 1) = G.

The proof that these are equivalent is left as a practice problem.

Naturally, two natural transformations α : F ⇒ G and β : G ⇒ H can be composed; the component of the composite β ∘ α at A is just

\[
F(A) \xrightarrow{\,\alpha_A\,} G(A) \xrightarrow{\,\beta_A\,} H(A).
\]

Now suppose α is a natural transformation such that αA is an isomorphism for each A. Then we can construct an inverse transformation β by letting its component at A be the inverse arrow βA = (αA)⁻¹ : G(A) → F(A).
In this case, we say α is a natural isomorphism. We can then say that F(A)∼=G(A) naturally in A. (And β is an isomorphism too!) This means that the functors F and G are “really the same”: not only are they isomorphic on the level of objects, but these isomorphisms are “natural”. As a result of this, we also write F∼=G to mean that the functors are naturally isomorphic.

This is what it really means when we say that "there is a natural / canonical isomorphism". For example, I claimed earlier (in ??) that there was a canonical isomorphism (V^∨)^∨ ≅ V, and mumbled something about "not having to pick a basis" and "God-given". Category theory, amazingly, lets us formalize this: it just says that (V^∨)^∨ ≅ id(V) = V naturally in V ∈ FDVectk. Really, we have a natural transformation

\[
\varepsilon : \operatorname{id} \Rightarrow (-^{\vee})^{\vee}
\]

between functors FDVectk → FDVectk, where the component εV is given by v ↦ ev_v (as discussed earlier, the fact that it is an isomorphism follows from the fact that V and (V^∨)^∨ have equal dimensions and εV is injective).
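For concreteness (an added verification), naturality of ε at a map T : V → W demands (T^∨)^∨ ∘ εV = εW ∘ T, which holds because for every v ∈ V and ξ ∈ W^∨,

\[
\big((T^\vee)^\vee(\operatorname{ev}_v)\big)(\xi)
= \operatorname{ev}_v(T^\vee \xi)
= (T^\vee \xi)(v)
= \xi(Tv)
= \operatorname{ev}_{Tv}(\xi)
= \big(\varepsilon_W(Tv)\big)(\xi).
\]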

61.6  (Optional) The Yoneda lemma

Now that I have natural transformations, I can define:

Definition 61.6.1. The functor category of two categories 𝒜 and ℬ, denoted [𝒜, ℬ], is defined as follows: its objects are the functors F : 𝒜 → ℬ, and its arrows are the natural transformations between them.

Question 61.6.2. When are two objects in the functor category isomorphic?

With this, I can make good on the last example I mentioned at the beginning:

Exercise 61.6.3. Construct the following functors: a functor 𝒜 → [𝒜op, Set] given on objects by A ↦ H_A, and a functor 𝒜op → [𝒜, Set] given on objects by A ↦ H^A. (The content of the exercise is specifying what happens to arrows, and checking functoriality.)

Notice that we have opposite categories either way; even if you like H^A because it is covariant, the assignment A ↦ H^A is contravariant. So for what follows, we'll prefer to use H_A.

The main observation now is that given a category 𝒜, the assignment A ↦ H_A provides some special functors 𝒜op → Set which are already "built in" to the category 𝒜. In light of this, we define:

Definition 61.6.4. A presheaf X on 𝒜 is just a functor 𝒜op → Set (i.e. a contravariant functor on 𝒜). It is called representable if X ≅ H_A for some A.

In other words, when we think about representable, the question we’re asking is:

What kind of presheaves are already “built in” to the category 𝒜?

One way to get at this question is: given a presheaf X and a particular H_A, we can look at the set of natural transformations α : H_A ⇒ X, and see if we can learn anything about it. In fact, this set can be written explicitly:

Theorem 61.6.5 (Yoneda lemma)
Let 𝒜 be a category, pick A ∈ 𝒜, and let H_A be the contravariant Yoneda functor. Let X : 𝒜op → Set be a contravariant functor. Then the map

\[
\left\{ \text{natural transformations } \alpha : H_A \Rightarrow X \right\} \to X(A)
\]

defined by α ↦ αA(id_A) ∈ X(A) is an isomorphism of Set (i.e. a bijection). Moreover, if we view both sides of the equality as functors

\[
\mathcal{A}^{\mathrm{op}} \times [\mathcal{A}^{\mathrm{op}}, \mathsf{Set}] \to \mathsf{Set},
\]

then this isomorphism is natural.

This might be startling at first sight. Here's an unsatisfying explanation of why this might not be too crazy: in category theory, a rule of thumb is that "two objects of the same type that are built naturally are probably the same". You can see this theme when we defined functors and natural transformations, and even just compositions. Now, to look at the set of natural transformations, we took a pair of elements A ∈ 𝒜 and X ∈ [𝒜op, Set] and constructed a set of natural transformations from them. Is there another way we can get a set from these two pieces of information? Yes: just look at X(A). The Yoneda lemma is telling us that our heuristic still holds true here.

Some consequences of the Yoneda lemma are recorded in [?]. Since this chapter is already a bit too long, I’ll just write down the statements, and refer you to [?] for the proofs.

1.
As we mentioned before, A ↦ H_A provides a functor

\[
\mathcal{A} \to [\mathcal{A}^{\mathrm{op}}, \mathsf{Set}].
\]

It turns out this functor is in fact fully faithful; it quite literally embeds the category 𝒜 into the functor category on the right (much like Cayley's theorem embeds every group into a permutation group).

2.
If X, Y ∈ 𝒜 then

\[
H_X \cong H_Y \iff X \cong Y \iff H^X \cong H^Y.
\]

To see why this is expected, consider 𝒜 = Grp for concreteness. Suppose A, X, Y are groups such that H_X(A) ≅ H_Y(A) for all A. Each A gives us some information on how X and Y are similar, but the whole natural isomorphism is strong enough to imply X ≅ Y.

3.
Consider the forgetful functor U : Grp → Set. It can be represented by ℤ, in the sense that

\[
\operatorname{Hom}_{\mathsf{Grp}}(\mathbb{Z}, G) \cong U(G) \qquad \text{by} \qquad \phi \mapsto \phi(1).
\]

That is, elements of G are in bijection with maps ℤ → G, determined by the image of +1 (or −1 if you prefer). So a representation of U was determined by looking at ℤ and picking +1 ∈ U(ℤ).

The generalization of this is as follows: let 𝒜 be a category and X : 𝒜 → Set a covariant functor. Then a representation H^A ≅ X consists of an object A ∈ 𝒜 and an element u ∈ X(A) satisfying a certain condition. You can read off the condition¹ if you know what the inverse map is in ??. In the above situation, X = U, A = ℤ, and u = ±1.

¹ Just for completeness, the condition is: for all A′ ∈ 𝒜 and x ∈ X(A′), there's a unique f : A → A′ with (Xf)(u) = x.
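To make the bijection above fully explicit (an added check, writing G multiplicatively): the two maps

\[
\phi \mapsto \phi(1)
\qquad\text{and}\qquad
g \mapsto \left( n \mapsto g^n \right)
\]

are mutually inverse, since n ↦ g^n is the unique homomorphism ℤ → G sending 1 to g, and evaluating it at 1 recovers g.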

61.7  A few harder problems to think about

Problem 61A. Show that the two definitions of natural transformation (one in terms of functors 𝒜 × 2 → ℬ and one in terms of component arrows αA : F(A) → G(A)) are equivalent.

Problem 61B. Let 𝒜 be the category of finite sets whose arrows are bijections between sets. For A ∈ 𝒜, let F(A) be the set of permutations of A and let G(A) be the set of orderings on A.

(a)
Extend F and G to functors 𝒜→Set.
(b)
Show that F(A)∼=G(A) for every A, but this isomorphism is not natural.

Problem 61C (Proving the Yoneda lemma). In the context of ??:

(a)
Prove that the map described is in fact a bijection. (To do this, you will probably have to explicitly write down the inverse map.)
(b)
Prove that the bijection is indeed natural. (This is long-winded, but not difficult; from start to finish, there is only one thing you can possibly do.)

62  Limits in categories (TO DO)

We saw near the start of our category theory chapter the nice construction of products by drawing a bunch of arrows. It turns out that this concept can be generalized immensely, and I want to give you a taste of that here.

In this chapter, we follow the approach of [?].


62.1  Equalizers

Prototypical example for this section: The equalizer of f, g : X → Y is the set of points with f(x) = g(x).

Given two sets X and Y, and maps f, g : X → Y, we define their equalizer to be

\[
\{ x \in X \mid f(x) = g(x) \}.
\]

We would like a categorical way of defining this, too.

Consider two objects X and Y with two maps f and g between them. Stealing a page from [?], we call this a fork:

\[
X \underset{g}{\overset{f}{\rightrightarrows}} Y.
\]

A cone over this fork is an object A together with arrows over X and Y which make the diagram commute; concretely, an arrow q : A → X with f ∘ q = g ∘ q (the arrow over Y is then determined).
Effectively, the arrow over Y is just forcing f ∘ q = g ∘ q. In any case, the equalizer of f and g is a "universal cone" over this fork: it is an object E and a map e : E → X such that for each q : A → X as above there is a unique h : A → E with q = e ∘ h. In other words, any such map q must factor uniquely through E. Again, the dotted arrows can be omitted, and as before equalizers may not exist. But when they do exist:

Exercise 62.1.1. If e : E → X and e′ : E′ → X are both equalizers of the same fork, show that E ≅ E′.

Example 62.1.2 (Examples of equalizers)

(a)
In Set, given f, g : X → Y the equalizer can be realized as the set E = {x ∈ X : f(x) = g(x)}, with the inclusion e : E ↪ X as the morphism. As usual, by abuse we'll often just refer to E as the equalizer.
(b)
Ditto in Top, Grp. One has to check that the appropriate structures are preserved (e.g. one should check that {g ∈ G : ϕ(g) = ψ(g)} is a group).
(c)
In particular, given a homomorphism ϕ : G → H, the inclusion ker ϕ ↪ G is an equalizer for the fork G ⇉ H given by ϕ and the trivial homomorphism.

According to (c), equalizers let us get at the concept of a kernel whenever there is a distinguished "trivial map", like the trivial homomorphism in Grp. We'll flesh this idea out in the chapter on abelian categories.
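In fact (an added observation), in a category such as Ab where arrows can be subtracted, the equalizer of f, g : X → Y is exactly the kernel of their difference:

\[
\operatorname{Eq}(f, g) = \{ x \in X \mid f(x) = g(x) \} = \ker(f - g).
\]

This is one reason kernels suffice in the abelian setting of the next chapter.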

62.2  Pullback squares (TO DO)


Great example: differentiable functions on (−3, 1) and (−1, 3)

Example 62.2.1

62.3  Limits

We’ve defined cones over discrete sets of Xi and over forks. It turns out you can also define a cone over any general diagram of objects and arrows; we specify a projection from A to each object and require that the projections from A commute with the arrows in the diagram. (For example, a cone over a fork is a diagram with two edges and two arrows.) If you then demand the cone be universal, you have the extremely general definition of a limit. As always, these are unique up to unique isomorphism. We can also define the dual notion of a colimit in the same way.

62.4  A few harder problems to think about

Problem 62A (Equalizers are monic). Show that the equalizer of any fork is monic.

pushout square gives tensor product

p-adic

relative Chinese remainder theorem!!

63  Abelian categories

In this chapter I’ll translate some more familiar concepts into categorical language; this will require some additional assumptions about our category, culminating in the definition of a so-called “abelian category”. Once that’s done, I’ll be able to tell you what this “diagram chasing” thing is all about.

Throughout this chapter, "↪" will be used for monic maps and "↠" for epic maps.

63.1  Zero objects, kernels, cokernels, and images

Prototypical example for this section: In Grp, the trivial group and the trivial homomorphisms are the zero objects and zero morphisms. If G, H are abelian, then the cokernel of ϕ : G → H is H / im ϕ.

A zero object of a category is an object 0 which is both initial and terminal; of course, it’s unique up to unique isomorphism. For example, in Grp the zero object is the trivial group, in Vectk it’s the zero-dimensional vector space consisting of one point, and so on.

Question 63.1.1. Show that Set and Top don’t have zero objects.

For the rest of this chapter, all categories will have zero objects.

In a category 𝒜 with zero objects, any two objects A and B thus have a distinguished morphism

A →  0 → B

which is called the zero morphism and also denoted 0. For example, in Grp this is the trivial homomorphism.

We can now define:

Definition 63.1.2. Consider a map f : A → B. The kernel is defined as the equalizer of this map and the zero map A → B. Thus, it's a map ker f : Ker f ↪ A such that

\[
f \circ \ker f = 0,
\]

and moreover any other map with this property factors uniquely through Ker f (so it is universal with this property). By ??, ker f is a monic morphism, which justifies the use of "↪".

Notice that we’re using kerf to represent the map and Kerf to represent the object Similarly, we define the cokernel, the dual notion:

Definition 63.1.3. Consider a map f : A → B. The cokernel of f is a map coker f : B ↠ Coker f such that

\[
\operatorname{coker} f \circ f = 0,
\]

and moreover any other map with this property factors uniquely through Coker f (so it is universal with this property). Thus it is the "coequalizer" of f and the zero map A → B. By the dual of ??, coker f is an epic morphism, which justifies the use of "↠".

Think of the cokernel of a map f : A → B as "B modulo the image of f", e.g.

Example 63.1.4 (Cokernels)
Consider the inclusion ℤ/6ℤ ≅ ⟨r⟩ ↪ D12 = ⟨r, s ∣ r^6 = s^2 = 1, rs = sr^{-1}⟩. Then the cokernel of this map in Grp is D12/⟨r⟩ ≅ ℤ/2ℤ.
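As another quick example (added): in Ab, the cokernel of multiplication by n on ℤ is the quotient by its image,

\[
\operatorname{coker}\left( \mathbb{Z} \xrightarrow{\times n} \mathbb{Z} \right) = \mathbb{Z}/n\mathbb{Z},
\]

matching the slogan "B modulo the image of f".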

This doesn’t always work out quite the way we want since in general the image of a homomorphism need not be normal in the codomain. Nonetheless, we can use this to define:

Definition 63.1.5. The image of f : A → B is the kernel of coker f. We denote Im f = Ker(coker f). This gives a unique map im f : A → Im f.

When it exists, this coincides with our concrete notion of "image". In a picture: since coker f ∘ f = 0, the universality of Im f = Ker(coker f) yields a unique map im f : A → Im f making the diagram

\[
A \xrightarrow{\operatorname{im} f} \operatorname{Im} f \hookrightarrow B \xrightarrow{\operatorname{coker} f} \operatorname{Coker} f
\]

commute, with the composite A → Im f ↪ B equal to f itself.

63.2  Additive and abelian categories

Prototypical example for this section: Ab, Vectk, or more generally ModR.

We can now define the notions of additive and abelian categories; these are the types of categories in which the constructions of the previous section are most useful.

Definition 63.2.1. An additive category 𝒜 is one such that:

  • 𝒜 has a zero object, and any two objects have a product; and
  • every Hom𝒜(A, B) forms an abelian group (written additively) whose addition distributes over composition:

\[
(g + h) \circ f = g \circ f + h \circ f
\qquad\text{and}\qquad
f \circ (g + h) = f \circ g + f \circ h.
\]

Definition 63.2.2. An abelian category 𝒜 is an additive category with the additional properties that for any morphism f : A → B,

  • the kernel and cokernel of f both exist; and
  • the canonical map im f : A → Im f is epic (so every map factors as an epic followed by a monic).

So, this yields a diagram

\[
\operatorname{Ker}(f)\ \overset{\ker(f)}{\hookrightarrow}\ A\ \overset{\operatorname{im}(f)}{\twoheadrightarrow}\ \operatorname{Im}(f)\ \hookrightarrow\ B\ \overset{\operatorname{coker}(f)}{\twoheadrightarrow}\ \operatorname{Coker}(f).
\]

Example 63.2.3 (Examples of abelian categories)

(a)
Vectk, Ab are abelian categories, where f + g takes its usual meaning.
(b)
Generalizing this, the category ModR of R-modules is abelian.
(c)
Grp is not even additive, because there is no way to assign a commutative addition to pairs of morphisms.

In general, once you assume a category is abelian, all the properties of kernels, cokernels, and so on that you would guess to hold are indeed true. For example,

Proposition 63.2.4 (Monic ⟺ trivial kernel)
A map f : A → B is monic if and only if its kernel is 0 → A. Dually, f : A → B is epic if and only if its cokernel is B → 0.

Proof. The easy direction is:

Exercise 63.2.5. Show that if f : A → B is monic, then 0 → A is a kernel. (This holds even in non-abelian categories.)

Of course, since kernels are unique up to isomorphism, this gives monic ⟹ the kernel is 0 → A. On the other hand, assume that 0 → A is a kernel of f : A → B. For this we can exploit the group structure of the underlying homomorphisms now. Assume g, h : Z → A are maps with f ∘ g = f ∘ h. Then

\[
f \circ (g - h) = f \circ g - f \circ h = 0,
\]

and we've arrived at a commutative diagram in which the composite of g − h : Z → A with f is the zero map Z → B. But since 0 → A is a kernel, it follows that g − h factors through 0, so g − h = 0 ⟹ g = h, which is to say that f is monic. □

Proposition 63.2.6 (Isomorphism ⟺ monic and epic)
In an abelian category, a map is an isomorphism if and only if it is monic and epic.

Proof. Omitted. (The Mitchell embedding theorem presented later implies this anyways for most situations we care about, by looking at a small sub-category.) □

63.3  Exact sequences

Prototypical example for this section: 0 → G → G × H → H → 0 is exact.

Exact sequences will seem exceedingly unmotivated until you learn about homology groups, which is one of the most natural places that exact sequences appear. In light of this, it might be worth trying to read the chapter on homology groups simultaneously with this one.

First, let me state the definition for groups, to motivate the general categorical definition. A sequence of groups

\[
G_0 \xrightarrow{f_1} G_1 \xrightarrow{f_2} G_2 \xrightarrow{f_3} \cdots \xrightarrow{f_n} G_n
\]

is exact at Gk if the image of fk is the kernel of fk+1. We say the entire sequence is exact if it's exact at each of k = 1, …, n − 1.

Example 63.3.1 (Exact sequences)

(a)
The sequence

\[
0 \to \mathbb{Z}/3\mathbb{Z} \overset{\times 5}{\hookrightarrow} \mathbb{Z}/15\mathbb{Z} \twoheadrightarrow \mathbb{Z}/5\mathbb{Z} \to 0
\]

is exact. Actually, 0 → G ↪ G × H ↠ H → 0 is exact in general. (Here 0 denotes the trivial group.)

(b)
For groups, the sequence 0 → A → B is exact if and only if A → B is injective.
(c)
For groups, the sequence A → B → 0 is exact if and only if A → B is surjective.

Now, we want to mimic this definition in a general abelian category 𝒜. So, let's write down a criterion for exactness of f : A → B followed by g : B → C. First, we had better have that g ∘ f = 0, which encodes the fact that im(f) ⊆ ker(g). Adding in all the relevant objects: f factors as the epic im f : A ↠ Im f followed by ι : Im f ↪ B (the first map is epic since we are assuming 𝒜 is an abelian category), and Ker g ↪ B is the kernel of g. So, we have that

\[
0 = g \circ f = g \circ (\iota \circ \operatorname{im} f) = (g \circ \iota) \circ \operatorname{im} f,
\]

but since im f is epic, this means that g ∘ ι = 0. So ι factors through Ker g: there is a unique map Im f → Ker g, and we require that the resulting diagram commutes. In short,

Definition 63.3.2. Let 𝒜 be an abelian category. The sequence

\[
\cdots \to A_{n-1} \xrightarrow{f_n} A_n \xrightarrow{f_{n+1}} A_{n+1} \to \cdots
\]

is exact at An if f_{n+1} ∘ f_n = 0 and the canonical map Im f_n → Ker f_{n+1} is an isomorphism. The entire sequence is exact if it is exact at each An. (For finite sequences, we don't impose any condition on the very first and very last objects.)

Exercise 63.3.3. Show that, as before, 0 → A → B is exact ⟺ A → B is monic.

63.4  The Freyd-Mitchell embedding theorem

We now introduce the Freyd-Mitchell embedding theorem, which essentially says that any abelian category can be realized as a concrete one.

Definition 63.4.1. A category is small if obj(𝒜) is a set (as opposed to a class), i.e. there is a “set of all objects in 𝒜”. For example, Set is not small because there is no set of all sets.

Theorem 63.4.2 (Freyd-Mitchell embedding theorem)
Let 𝒜 be a small abelian category. Then there exists a ring R (with 1, but possibly non-commutative) and a full, faithful, exact functor from 𝒜 to the category of left R-modules.

Here a functor is exact if it preserves exact sequences. This theorem is good because it means

You can basically forget about all the weird definitions that work in any abelian category.

Any time you’re faced with a statement about an abelian category, it suffices to just prove it for a “concrete” category where injective/surjective/kernel/image/exact/etc. agree with your previous notions. A proof by this means is sometimes called diagram chasing.

Remark 63.4.3 — The "small" condition is a technical obstruction that requires the objects of 𝒜 to actually form a set. I'll ignore this distinction, because one can almost always work around it by doing enough set-theoretic technicalities.

For example, let’s prove:

Lemma 63.4.4 (Short five lemma)
In an abelian category, consider the commutative diagram

\[
\begin{array}{ccccccccc}
0 & \to & A & \overset{p}{\hookrightarrow} & B & \overset{q}{\twoheadrightarrow} & C & \to & 0 \\
  &     & \downarrow{\scriptstyle \alpha} & & \downarrow{\scriptstyle \beta} & & \downarrow{\scriptstyle \gamma} & & \\
0 & \to & A' & \overset{p'}{\hookrightarrow} & B' & \overset{q'}{\twoheadrightarrow} & C' & \to & 0
\end{array}
\]

and assume the top and bottom rows are exact. If α and γ are isomorphisms, then so is β.

Proof. We prove that β is epic (the proof that it is monic is similar; a sketch of that half appears after this proof). By the embedding theorem we can treat the category as R-modules over some ring R. This lets us do a so-called "diagram chase" where we move elements around the picture, using the concrete interpretation of our category as R-modules.

Let b′ be an element of B′. Then q′(b′) ∈ C′, and since γ is surjective, we have a c ∈ C such that γ(c) = q′(b′), and finally a b ∈ B such that q(b) = c (using that q is surjective, by exactness of the top row).
Now, it is not necessarily the case that β(b) = b′. However, since the diagram commutes we at least have that

\[
q'(b') = q'(\beta(b)),
\]

so b′ − β(b) ∈ Ker q′ = Im p′, and there is an a′ ∈ A′ such that p′(a′) = b′ − β(b); use α now to lift it to an a ∈ A with α(a) = a′. Then, we have

\[
\beta(b + p(a)) = \beta(b) + \beta(p(a)) = \beta(b) + p'(\alpha(a)) = \beta(b) + (b' - \beta(b)) = b',
\]

so b′ ∈ Im β, which completes the proof that β is surjective. □
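For completeness, here is a sketch of the omitted monic half (an addition; the original leaves it to the reader): suppose β(b) = 0. Then γ(q(b)) = q′(β(b)) = 0, so q(b) = 0 since γ is injective, and exactness of the top row gives b = p(a) for some a ∈ A. Now

\[
p'(\alpha(a)) = \beta(p(a)) = \beta(b) = 0,
\]

so α(a) = 0 since p′ is monic, hence a = 0 since α is injective, and therefore b = p(a) = 0.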

63.5  Breaking long exact sequences

Prototypical example for this section: First isomorphism theorem.

In fact, it turns out that any exact sequence breaks into short exact sequences. This relies on:

Proposition 63.5.1 (“First isomorphism theorem” in abelian categories)
Let f : A → B be an arrow of an abelian category. Then there is an exact sequence

\[
0 \to \operatorname{Ker} f \xrightarrow{\ker f} A \xrightarrow{\operatorname{im} f} \operatorname{Im} f \to 0.
\]

Example 63.5.2
Let’s analyze this theorem in our two examples of abelian categories:

(a)
In the category of abelian groups, this is basically the first isomorphism theorem.
(b)
In the category Vectk, this amounts to the rank-nullity theorem, ??.

Thus, any exact sequence can be broken into short exact sequences: setting Ck = Im fk = Ker fk+1 ⊆ Ak for every k, the long exact sequence ⋯ → An−1 → An → An+1 → ⋯ interleaves with the short exact sequences

\[
0 \to C_n \to A_n \to C_{n+1} \to 0
\]

for each n.

63.6  A few harder problems to think about

Problem 63A (Four lemma). In an abelian category, consider the commutative diagram

\[
\begin{array}{ccccccc}
A & \xrightarrow{\,p\,} & B & \xrightarrow{\,q\,} & C & \xrightarrow{\,r\,} & D \\
\downarrow{\scriptstyle \alpha} & & \downarrow{\scriptstyle \beta} & & \downarrow{\scriptstyle \gamma} & & \downarrow{\scriptstyle \delta} \\
A' & \xrightarrow{\,p'\,} & B' & \xrightarrow{\,q'\,} & C' & \xrightarrow{\,r'\,} & D'
\end{array}
\]

where the first and second rows are exact. Prove that if α is epic, and β and δ are monic, then γ is monic.

Solution. Let c ∈ C with γ(c) = 0. We show c = 0. This proceeds in a diagram chase:

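(A chase along the presumably intended lines, added here:) Since δ(r(c)) = r′(γ(c)) = 0 and δ is monic, r(c) = 0, so exactness at C gives b ∈ B with q(b) = c. Then

\[
q'(\beta(b)) = \gamma(q(b)) = \gamma(c) = 0,
\]

so exactness at B′ gives a′ ∈ A′ with p′(a′) = β(b); since α is epic, write a′ = α(a). Then β(p(a)) = p′(α(a)) = β(b), and β monic forces p(a) = b, whence c = q(b) = q(p(a)) = 0 by exactness at B. ∎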

Problem 63B (Five lemma). In an abelian category, consider the commutative diagram

\[
\begin{array}{ccccccccc}
A & \xrightarrow{\,p\,} & B & \xrightarrow{\,q\,} & C & \xrightarrow{\,r\,} & D & \xrightarrow{\,s\,} & E \\
\downarrow{\scriptstyle \alpha} & & \downarrow{\scriptstyle \beta} & & \downarrow{\scriptstyle \gamma} & & \downarrow{\scriptstyle \delta} & & \downarrow{\scriptstyle \varepsilon} \\
A' & \xrightarrow{\,p'\,} & B' & \xrightarrow{\,q'\,} & C' & \xrightarrow{\,r'\,} & D' & \xrightarrow{\,s'\,} & E'
\end{array}
\]

where the first and second rows are exact. Prove that if α is epic, ε is monic, and β and δ are isomorphisms, then γ is an isomorphism as well. Thus this is a stronger version of the short five lemma.

Problem 63C (Snake lemma). In an abelian category, consider the diagram